Braun and Clarke 2022 Chapter 8


Understanding similarities and differences between thematic analysis and its methodological siblings and cousins.


1

What is thematic analysis (TA)?

It is a method of qualitative analysis, widely used across the social and health sciences, and beyond, for exploring, interpreting and reporting relevant patterns of meaning across a dataset. It utilizes codes and coding to develop themes. Canadian sports researchers Lisa Trainor and Andrea Bundon (2020) state that TA “simply cannot be simplified; it is a complex and beautiful method with so many options” (p. 1).

2

What are some myths or misunderstandings of TA?

  1. There is a singular method called TA.

  2. TA is not actually a distinct method but a set of generic analytic procedures (e.g. Flick, 2014a; Pistrang & Barker, 2010).

  3. TA is only a very ‘basic’ or unsophisticated method (e.g. Crowe, Inder, & Porter, 2015); it needs another method to bring interpretative depth (such as grounded theory).

  4. TA is only a summative or descriptive method (e.g. Aguinaldo, 2012; Floersch, Longhofer, Kranke, & Townsend, 2010; Vaismoradi, Turunen, & Bondas, 2013).

  5. TA is only an inductive method.

  6. TA is only an experiential method (e.g. Flick, 2014a; 2014b).

  7. TA is only a realist or essentialist method.

  8. TA is atheoretical (e.g. Crowe et al., 2015; Flick, 2014a; Vaismoradi et al., 2013).

  9. The proper way to do TA is…

  10. There are no guidelines for how to do TA (e.g. Nowell et al., 2017; Xu & Zammit, 2020).

  11. It’s difficult to judge the quality of TA.

3

What are some responses to those myths or misunderstandings of TA?

  1. TA is not a singular method; there are many different versions of and approaches to TA. Some are idiosyncratic to one researcher/team; some are widely used. The differences between approaches can be significant.

  2. This feels a bit like ‘splitting hairs’. TA (in different versions) is a widely used method in its own right and offers a robust and coherent approach to data analysis.

  3. The sophistication - or not - of the analysis depends on the use of the method, not the method itself. TA can be used to produce a sophisticated, nuanced, insightful analysis, just as other approaches such as grounded theory can (likewise, all analytic approaches can produce poor quality analyses if used badly).

  4. TA should be used to do more than provide data summary or data reduction. A more descriptive analysis is only one of the many types of analysis TA can produce. Descriptive analyses can’t just be data reduction, as even descriptive approaches involve interpretation by an inescapably subjective researcher.

  5. TA works very well for producing inductive - or data driven - analyses; it also works well for producing latent and deeply theoretical analyses.

  6. TA can work well for research seeking to understand people’s subjective experiences or perspectives. Some versions - including reflexive TA - work equally well for critical qualitative research and the analysis of data that doesn’t center on “subjective viewpoints” (Flick, 2014a, p. 423).

  7. TA can be essentialist or realist, but some versions - including reflexive TA - can also be constructionist or critical.

  8. Because TA is independent of an inbuilt theory (making it more method than methodology), some interpret this independence to mean theory can be ignored. This is a misreading; theory must be considered, as it’s always there, even if not articulated. In TA, the researcher must select the theoretical and conceptual assumptions that inform their use of TA. Although most approaches to TA claim to be theoretically independent or flexible, some approaches are more flexible than others, and all reflect broad paradigmatic assumptions that shape the analytic procedures and the conceptualization of core constructs such as the ‘theme’.

  9. There are many variations of this myth, each claiming that a particular methodological tool or technique is something all TA should do (e.g. consensus coding; early theme development). The different versions of TA often have distinctive techniques, reflecting their quite different conceptual foundations.

  10. There are many guidelines for how to do TA, including many from Braun and Clarke on how to do reflexive TA!

  11. This ranges from claims that researchers ‘bias’ the research, to the bizarre suggestion that TA researchers ‘leave out’ inconveniently disagreeing data (Aguinaldo, 2012), to the claim that there aren’t good discussions of quality and standards by which to judge quality. There are!

4

What is the most satisfying explanation for the origins of TA?

The most satisfying explanation for the origins of TA - proposed by British-based psychologist Helene Joffe (2012) - is that it developed from qualitative refinements of content analysis. Some researchers early in the history of TA used the term TA interchangeably with ‘content analysis’ to describe their analytic techniques, and many others continue to do so (see Braun & Clarke, 2021b). The use of the term ‘thematic content analysis’ is still relatively common (e.g. Brewster, Velez, Mennicke, & Tebbe, 2014; J. Green & Thorogood, 2009).

5

What is currently the most useful way to differentiate the field of TA?

Currently, Braun and Clarke regard a tripartite division between clusters of similar approaches to TA as the most useful way to differentiate the field. Fugard and Potts (2020) described TA as a family of methods. You can imagine each ‘cluster’ as a branch of a family tree descended from an original pair of parents - each cluster made up of one sibling or cousin and their collected family members.

6

What are the ‘Big Q’ and ‘small q’ qualitative paradigms?

The term ‘qualitative research’ describes a range of research practices - from the use of qualitative techniques of data collection and analysis within a postpositivist paradigm (small q qualitative) to the use of these within a qualitative paradigm (Big Q qualitative). (Post)positivism is the paradigm that generally underpins quantitative research. It emphasizes objectivity and the possibility of discovering universal truths independent of the methods used to discover those truths. A qualitative paradigm emphasizes the multiple and contextual nature of meaning and knowledge, and researcher subjectivity as a resource for research (Braun & Clarke, 2013). To make the landscape of TA extra confusing, TA researchers don’t always acknowledge which conceptualization of qualitative research underpins their research, or they seem to draw on a ‘mash-up’ of both understandings, often without awareness or discussion (see Braun & Clarke, 2021c).

7

How do Big Q or ‘fully qualitative’ research and small q or ‘qualitative positivism’ tend to operate?

In Big Q or ‘fully qualitative’ research, processes tend to be flexible, interpretative, and subjective/reflexive. Big Q rejects notions of objectivity and context-independent or researcher-independent truths, and instead emphasizes the contextual or situated nature of meaning, and the inescapable subjectivity of research and the researcher (Braun & Clarke, 2013). Small q qualitative or ‘qualitative positivism’ (Brinkmann, 2015; W. L. Miller & Crabtree, 1999) often involves a more structured approach to data collection and analysis, guided by a concern to: (a) minimize the researcher’s influence on the research process - conceptualized as ‘bias’ - and (b) achieve accurate and objective results. Such aspirations and practices are incompatible with a Big Q approach. These conceptual and practice-based differences inform our tripartite clustering of TA methods. TA methods typically differentiate between coding - the way to your destination - and the theme - the destination.

8

What is ‘coding’?

Coding is a process common across TA and indeed many other qualitative analytic approaches. Through close data engagement, data meanings are tagged with code labels. Within some approaches to TA, coding is only conceptualized as a process - the process through which themes are identified. The point of coding is to find evidence for themes. Within other approaches, including reflexive TA, the ‘code’ exists as an analytic entity in its own right, a ‘product’ that results from early phases of analytic development. Coding is not just the process for theme development; coding is the process for generating codes and code labels, and tagging data with code labels. For methodologists like Braun and Clarke who view a code as an analytic entity and output, codes are conceptualized as the ‘building blocks’ of analysis - themes are developed from codes, and thus represent a second ‘level’ of data analysis.

9

What is a ‘theme’?

Richard Boyatzis’s definition of a theme is widely cited: “a theme is a pattern that at the minimum describes and organizes possible observations or at the maximum interprets aspects of the phenomenon” (1998, p. vii). This singular definition belies the use of two vastly different conceptualizations of ‘themes as patterns’. Braun and Clarke conceptualize themes as patterns of meaning (e.g. concepts, ideas, experience, sense-making) that are underpinned and unified by a central idea. This central idea, concept or meaning that unites or holds a theme together is sometimes quite explicitly expressed (a ‘semantic’ theme) and sometimes quite conceptually or implicitly evidenced (a ‘latent’ theme). In the latter instances, the data that evidence the patterning of the theme might appear quite disparate.

10

What key features of themes - a good fit with how Braun and Clarke conceptualize themes - did U.S. nursing scholars Lydia DeSantis and Doris Ugarriza (2000) identify?

  • Themes are actively produced by the researcher; they don’t “spontaneously fall out” (p. 355) of data;

  • Are abstract entities, often capturing implicit meaning ‘beneath the surface’ but can be illustrated with more explicit and concrete data;

  • Unite data that might otherwise appear disparate, or unite meaning that occurs in multiple and varied contexts;

  • Explain large portions of data, and are built from smaller meaning units (codes);

  • Capture the essence of meaning;

  • Are different from data domains, but can explain and unite phenomena within a domain; and

  • Are recurrent.

11

How are themes conceptualized as analytic outputs?

A theme is an outcome of coding, not something that is, in itself, coded. Like the code, the theme is conceptualized as an analytic output, distinct from and developed from smaller meaning units (codes). Themes are predominantly conceptualized in two different ways: as topic summaries or as patterns of shared meaning underpinned by a central organizing concept. Understanding this distinction is key for doing good (reflexive) TA.

12

What are ‘topic summaries’?

There is a second prevalent way ‘theme’ is used in TA. Such ‘themes’ do not fit the conceptualization of a theme as a pattern of shared meaning; Braun and Clarke call them ‘topic summaries’. This name evokes what such ‘themes’ effectively capture: the diversity of responses to a topic, issue, or area of the data repeatedly spoken or written about. Topic summaries do not evidence meaning organized around a central idea or concept that unites the observations; instead, the analysis reports ‘everything that was said about X’. Sometimes these topics reflect the very questions participants were asked to respond to during data collection (e.g. the main interview questions). For these reasons, topic summaries have been characterized as instances of poorly realized or underdeveloped analysis (Braun & Clarke, 2006; Connelly & Peltzer, 2016). Topic summaries typically focus on surface-level or descriptive meaning. They are often named with a single term that captures the topic or focus (e.g. ‘emotional’, ‘behavioral’, ‘diagnosis’, ‘cognitive’, ‘body’ and ‘intersubjective’, in Floersch et al., 2010) or are rather general (e.g. ‘Perceived risks and benefits associated with conventional cigarettes versus e-cigarettes’, in Roditis & Halpern-Felsher, 2015).

13

What are two main uses of a ‘theme’?

  • Shared-meaning themes

  • Topic summaries

14

What is researcher subjectivity (reflexivity)?

Big Q research can be considered reflexive research, where knowledge is situated, and inevitably and inescapably shaped by the processes and practices of knowledge production, including the practices of the researcher (Finlay, 2002a, 2002b; Gough, 2017). Research within a qualitative paradigm values reflexivity, subjectivity, and indeed the contextual, partial and located nature of knowledge. Conceptualizations of researcher subjectivity vary from framing it as bias with the potential to distort the ‘accuracy’ of coding through to viewing it as an essential resource for analysis. Within postpositivist or small q research, such elements are framed as bias or even as a contaminant; their impact needs to be managed and minimized.

15

What are some different concerns at play related to subjectivity?

Across the varieties of TA, there are quite different concerns at play related to subjectivity, connected to what counts as the ‘best’ kind of knowledge. Relating to paradigmatic locations, this ranges from (ideally) bias-free, accurate and reliable knowledge at the small q/postpositivist end of the spectrum, through to situated, contextualized, subjective knowledge at the Big Q end of the spectrum. These epistemological positions produce distinct ‘quality’ practices, ranging from: (a) trying to manage the potential for researcher subjectivity to ‘bias’ and distort coding accuracy, through consensus coding and measuring inter-coder agreement; to (b) the researcher using a reflexive journal to reflect on their assumptions, and acknowledging the situated, partial and subjective nature of coding.

16

What are two ways that TA practice can be broadly conceptualized?

TA practice can broadly be conceptualized in two ways: (a) a process where the researcher identifies existing-in-the-dataset patterns of meaning; (b) a process where the researcher as a situated, subjective and skilled scholar, brings their existing knowledges to the dataset, to develop an understanding of patterned meaning in relation to the dataset. Depending on how the process of theme creation is conceptualized, themes can be understood as inputs into the analytic process, developed early on, or outputs from the analytic process, developed later on. Process A is most aligned to small q qualitative - a process for coding is required that rigorously and robustly identifies themes in a reliable way. These are effectively analytic ‘inputs’ that coding provides the evidence of. Process B is most aligned to Big Q qualitative - practices for coding and theme development need to reflexively grapple with researcher subjectivity, with theme development positioned as an active process. Themes are best conceptualized as ‘outputs’ of this analytic process.

17

How can we get our themes in TA? What are two different conceptualizations of the process?

  • Process A (theme identification; themes as inputs)

    • Data - identification of themes - coding for theme identification (as evidenced in the dataset).

  • Process B (theme development; themes as outputs)

    • Data - coding to explore and parse meaning - theme development and refinement (from codes and dataset).

18

What is meant by the phrase ‘themes do not emerge’?

This active process of reflexive engagement with data for theme development is captured by a phrase Braun and Clarke have become (in)famous for: themes do not emerge. This is not their original idea; it reflects a wider critique of the notion that analysis is a passive process of discovery. The idea that themes passively emerge from data also denies the active role the researcher plays in developing and reporting themes (Taylor & Ussher, 2001). Braun and Clarke still love this description of the problem with the language of ‘themes emerging’: it “can be misinterpreted to mean that themes ‘reside’ in the data, and if we just look hard enough they will ‘emerge’ like Venus on the half shell. If themes ‘reside’ anywhere, they reside in our heads from our thinking about our data and creating links as we understand them” (Ely, Vinz, Downing, & Anzul, 1997, pp. 205-206).

19

What are the three clusters that outline the tripartite typology for classifying forms of TA, based on what Braun and Clarke regard as characteristic philosophical assumptions and analytic procedures?

  • Reflexive TA - capturing approaches situated within a Big Q framework.

  • Coding reliability TA - capturing approaches situated within a small q framework.

  • Codebook TA - capturing approaches situated within what we characterize as a MEDIUM Q framework.

20

What is ‘Big Theory’ according to Reflexive TA, Coding Reliability, and Codebook?

  • Reflexive (Braun & Clarke, 2006) - Closer to a method than a methodology. Theoretically independent or flexible, with a wide range of theoretical positions possible: critical realist; relativist; constructionist.

  • Coding reliability (Boyatzis, 1998) - Not its own method(ology); a process used as part of many qualitative methods. Theoretically flexible, but claimed flexibility limited by postpositivist assumptions.

  • Codebook (Template analysis; N. King, 2012) - A technique (method) not a methodology. Theoretically flexible or independent - realist, critical realist (phenomenology), constructionist (broad patterns of discourse only).

21

What is a ‘theme’ according to Reflexive TA, Coding Reliability, and Codebook?

  • Reflexive (Braun & Clarke, 2006) - Patterned meaning across dataset. Relevant to the research question. United by shared idea/concept. Can have semantic or latent focus. Themes actively generated by the researcher, not discovered.

  • Coding reliability (Boyatzis, 1998) - A pattern that describes and organizes meaning; potentially also interprets. Can be identified at the manifest (directly observable) or latent (underlying) level. Themes identified by the researcher.

  • Codebook (Template analysis; N. King, 2012) - Recurrent features of data relevant to the research question. Themes reflect data topics (e.g. intergenerational issues), rather than storied, conceptual patterns. Themes created by the researcher, not discovered.

22

What is a ‘code/coding’ according to Reflexive TA, Coding Reliability, and Codebook?

  • Reflexive (Braun & Clarke, 2006) - Coding is an organic and evolving process of noticing potentially relevant meaning in the dataset, tagging it with a code, and ultimately building a set of codes from which themes are developed. Codes (analytic ‘outputs’) have ‘labels’ that evoke the relevant data meanings. Can focus on meaning from semantic to latent levels.

  • Coding reliability (Boyatzis, 1998) - Coding is a process to identify themes using a predetermined set of codes, organized with a codebook. Codes (analytic tools) are developed from themes, as a way to identify each theme. Codes and themes are sometimes used interchangeably (e.g. thematic codes).

  • Codebook (Template analysis; N. King, 2012) - Coding is a process to identify evidence for patterns (themes). Codes (analytic tools) are labels applied to data to identify it as an instance of a theme. Codes can be descriptive and interpretative. Codes and themes are organized hierarchically and sometimes laterally, into a layered template that guides coding for theme identification.

23

What is ‘Analytic orientation’ according to Reflexive TA, Coding Reliability, and Codebook?

  • Reflexive (Braun & Clarke, 2006) - Works from more inductive (‘bottom up’) to more deductive (‘top down’) orientations.

  • Coding reliability (Boyatzis, 1998) - Inductive or deductive; inductive the “least frequently used and probably the least understood” (p. x).

  • Codebook (Template analysis; N. King, 2012) - A middle ground: can be used inductively, but mainly deductive(ish), with a priori themes tentative, able to be redefined or discarded.

24

What is ‘Analytic process’ according to Reflexive TA, Coding Reliability, and Codebook?

  • Reflexive (Braun & Clarke, 2006) - Organic process; starts from familiarization; coding and recoding; theme development, revision and refinement in relation to the coded data and then the full dataset. Themes as analytic ‘outputs’.

  • Coding reliability (Boyatzis, 1998) - Development of themes and codes from a priori theory and research, or inductively; construction of codebook; application of codebook to data by multiple coders to find evidence for themes; testing coding reliability. Themes as analytic ‘inputs’.

  • Codebook (Template analysis; N. King, 2012) - Some a priori themes developed first from interview guide/literature; coding to evidence these and other themes; themes and codes refined after some initial coding; development and refinement of template; coding guided by template for final theme development. Themes as analytic ‘inputs’ - but they can also evolve and new ones can be developed, so can also be ‘outputs’.

25

What is ‘Researcher subjectivity’ according to Reflexive TA, Coding Reliability, and Codebook?

  • Reflexive (Braun & Clarke, 2006) - A resource to be utilized; the researcher is both active and positioned.

  • Coding reliability (Boyatzis, 1998) - A ‘risk’ to the validity and quality of the analysis. Needs to be ‘controlled’ and minimized.

  • Codebook (Template analysis; N. King, 2012) - Acknowledged and accepted. Reflexivity encouraged.

26

What is ‘Quality’ according to Reflexive TA, Coding Reliability, and Codebook?

  • Reflexive (Braun & Clarke, 2006) - Deep, questioning data engagement and a systematic, rigorous analytic process. Analysis moves beyond summary or paraphrasing. Researcher reflexivity and explication of choices.

  • Coding reliability (Boyatzis, 1998) - Multiple independent coders. Inter-rater reliability. Reliability conceptualized as consistency of judgement in applying codes to identify themes.

  • Codebook (Template analysis; N. King, 2012) - Participant feedback. Audit trails. Multiple researchers code and compare. Measures of inter-coder reliability not recommended.

27

How are the approaches to TA we call coding reliability united?

The approaches to TA we call coding reliability are united by adherence to (post)positivist notions of reliability and a sometimes-implicit, sometimes-explicit aim of seeking unbiased, objective truth from qualitative data. Although claimed as theoretically flexible, such approaches are delimited by the values of this broader paradigm. These approaches to TA typically share with codebook approaches a more structured coding practice than in reflexive TA. Coding is guided by a tool - a codebook or coding frame. In coding reliability forms of TA, the coding process culminates in a measure of ‘coding reliability’, which is used to assess the ‘accuracy’ or ‘reliability’ (hence the name ‘coding reliability’) of the coding process.

28

How is coding in small q TA conceptualized?

Coding in small q TA is conceptualized as a process, but codes themselves are not typically conceptualized as an analytic entity, an analytic output that is distinct from a theme. For instance, Guest et al. (2012) are unusual in defining ‘code’ but their definition demonstrates this blurriness: “A textual description of the semantic boundaries of a theme or a component of a theme. A code is a formal rendering of a theme” (p. 279). Procedures in small q TA center on the development of a codebook or coding frame, which is used to code for instances of themes (typically topic summaries). This of course means themes are developed before coding and are conceptualized as analytic inputs (as well as outputs - what is reported). The codebook typically consists of a definitive list of codes. For each code, there is a label, definition, instructions on how to identify the code/theme, details of any exclusions, and examples (Boyatzis, 1998).
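
To make this structure concrete, here is a minimal sketch of a single codebook entry in Python, following the elements just listed (label, definition, identification instructions, exclusions, examples). The code name and all wording are invented for illustration; they are not drawn from Boyatzis or any actual codebook.

```python
# A minimal sketch of one codebook entry, following the structure described
# above: label, definition, instructions for identification, exclusions, and
# examples. The code name and all wording are invented for illustration.
codebook_entry = {
    "label": "BARRIERS_TO_CARE",
    "definition": "Participant describes an obstacle to accessing healthcare.",
    "how_to_identify": (
        "Apply when the participant names a concrete obstacle (e.g. cost, "
        "distance, waiting times) in their own account of seeking care."
    ),
    "exclusions": "Do not apply to obstacles described as affecting others.",
    "examples": [
        "I couldn't afford the appointment fees.",
        "The nearest clinic is a two-hour drive away.",
    ],
}
```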

29

What are two ways that the codebook tends to be developed?

The codebook tends to be developed in one of two ways:

(1) deductively from pre-existing theory or research; or

(2) inductively, following data familiarization.

30

How is the codebook developed for a deductively-derived approach?

For a deductively-derived approach, the codebook is developed independently from the data, requiring little or no engagement with the dataset prior to coding. It is then ‘applied’ to the dataset - the researcher decides which sections of data are instances of particular codes/themes. Thus, the coding process conceptually represents a search for ‘evidence’ for themes. Such an approach echoes the scientific method - a researcher starts with a theory, develops hypotheses (themes) and conducts an experiment (coding) to test (find evidence for) the hypotheses (themes).

31

How is an inductively-developed codebook created?

For an inductively-developed codebook, the researcher familiarizes themselves with some (or occasionally all) of the data and then creates a codebook to be applied to the whole dataset to identify evidence of themes. Inductive approaches to codebook development in small q TA were originally described as the “least frequently used and probably the least understood” (Boyatzis, 1998, p. x), but more recent texts have emphasized “inductive analyses, which primarily have a descriptive and exploratory orientation” (Guest et al., 2012, p. 7).

32

How is the concern for ‘accuracy’ in coding managed when applying the codebook?

Such inductive orientations to codebook development retain a similar ‘evidencing’ ethos to deductive approaches, and a concern for ‘accuracy’ in coding. Quality procedures in coding reliability TA center on the need for accurate and reliable coding. The ‘threat’ researcher subjectivity poses to reliability is managed through the use of multiple coders, measuring inter-coder agreement, and consensus coding. To this end, small q approaches emphasize the need for multiple coders to apply the codebook to the dataset. These coders are ideally ‘blind’, in the sense of having no prior knowledge of the topic area and/or not being informed of the research question. The conceptual framework for multiple coders is one of consensus - whereby two or more coders coding the same piece of data in the same way is evidence of ‘accuracy’ and therefore quality.

33

What is the key quality requirement relating to coder agreement, and how is it assessed?

‘Coding reliability’ - a measure of the extent of agreement between different coders - is a key quality requirement. After multiple researchers have independently coded the data using the codebook, the level of ‘agreement’ between the coders will be calculated using one of a number of statistical tests (see O’Connor & Joffe, 2020; Yardley, 2008). Such assessments of inter-coder agreement determine whether coders applied the same code(s) to the same segments of data to a sufficiently high level that coding can be considered to be reliable. If a sufficient level of agreement is not reached, resolution occurs through recoding.
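
As an illustration of the kind of calculation involved, the sketch below computes raw percentage agreement and Cohen’s kappa - one commonly used agreement statistic (see O’Connor & Joffe, 2020) - for two coders who have each assigned a single code per data segment. The codes and data are invented; real projects may involve multiple codes per segment and other statistics.

```python
# Sketch: inter-coder agreement for two coders who each assigned one code
# per data segment. Codes and data are invented for illustration.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["A", "A", "B", "C", "B", "A", "C", "B"]
coder_2 = ["A", "B", "B", "C", "B", "A", "C", "A"]

# Raw percentage agreement: the proportion of segments coded identically.
agreement = sum(c1 == c2 for c1, c2 in zip(coder_1, coder_2)) / len(coder_1)

# Cohen's kappa corrects raw agreement for agreement expected by chance.
kappa = cohen_kappa_score(coder_1, coder_2)

print(f"Percentage agreement: {agreement:.2f}")  # 0.75
print(f"Cohen's kappa: {kappa:.2f}")             # ~0.62
```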

34

What is the key rationale for the quality procedures of the codebook?

One of the key rationales for such quality procedures is that the subjectivity individual researchers bring to the analytic process has to be managed. Overtly conceptualized as ‘bias’, researcher subjectivity is a threat, not a resource, for small q TA.

35

What do Braun and Clarke think is problematic about coding reliability approaches to TA?

There is nothing inherently ‘wrong’ with coding reliability approaches to TA, as long as your TA project is done in a way that is conceptually coherent, the analytic practices align with your purpose and framework, and the whole process is undertaken with integrity (see Braun & Clarke, 2021c, and Chapter Nine). That acknowledged, Braun and Clarke state that these approaches are not for them.

36

What is one critique of small q TA approaches, such as coding reliability?

Coding reliability approaches have a coherent internal logic, but it’s one that is fundamentally incompatible with Big Q qualitative research values. By prioritizing postpositivist conceptions of reliability, small q approaches mandate assumptions and practices at odds with qualitative research values (e.g. Grant & Giddings, 2002; Kidder & Fine, 1987; Madill & Gough, 2008). The idea that “themes are implicitly conceptualized as entities that pre-exist the analysis; analysis is about identifying or finding these themes in the data” is a problematic conceptualization for many qualitative researchers - not only because it rests on a whole set of rarely acknowledged theoretical assumptions, but because it renders researcher subjectivity at best invisible and at worst problematic. Take the claim that coding is better - that is, more accurate or objective - when two or more coders agree. The main concern here is to control for subjectivity. Hence, knowledge of previous research is positioned as a potential contaminant, something that risks distorting a researcher’s coding. The idea that coding can be distorted depends not only on a realist idea of singular truth, but also on idea(l)s of ‘discovery’ of that truth. Within coding reliability TA, coding is conceptualized as something that can be accurate (finding diamonds) or not (finding glass from broken bottles). Furthermore, instead of conceptualizing researcher subjectivity - including pre-existing knowledge of the topic - as valuable, it is viewed as a potential barrier to good coding.

37

What is another critique of small q TA approaches, such as coding reliability?

Another critique relates to the codes themselves and the limiting of analytic depth. The types of codes amenable to use within structured codebook approaches, including measurement of coding agreement, are often relatively coarse, superficial, or descriptively concrete. The critique US qualitative nursing researcher Janice Morse made over two decades ago still applies: “maintaining a simplified coding schedule for the purposes of defining categories for an inter-rater reliability check will simplify the research to such an extent that all of the richness attained from insight will be lost” (Morse, 1997, p. 446). We’d also ask: can ‘unknowledgeable’ coders effectively add such depth, insight and creativity? Moreover, as Irish and British psychologists Cliodhna O’Connor and Helene Joffe (2020) highlighted, there are only so many codes coders can hold in their working memory. The more codes there are in the codebook, the lower inter-coder agreement is likely to be. Limits of 30-40 (MacQueen, McLellan, Kay, & Milstein, 1998) and even 20 (Hruschka et al., 2004) codes have been recommended to facilitate agreement, which, depending on the size of the dataset, effectively rules out more fine-grained and nuanced coding. For us, this reveals a key problematic of approaches to coding that prioritize consensus (inter-coder agreement): uniformity is valued over depth of insight.

38

What is a third critique of small q TA approaches, such as coding reliability?

Given a close conceptual connection between codes and themes, coding reliability approaches also often produce themes that are relatively superficial and - in our view - ‘underdeveloped’ (Connelly & Peltzer, 2016). Such underdeveloped ‘themes’ are, in practice, often simply topic summaries, each providing a summary of patterns in participants’ responses to a particular topic. These topics sometimes map closely onto data collection questions. As an example, the following three ‘themes’ were reported in a focus group study of anorexia patients’ perspectives on a group intervention for perfectionism: (1) perceived benefits of the group; (2) nature/content of the group; (3) suggested improvements. These themes very closely mapped onto the questions asked in the focus groups (Larsson, Lloyd, Westwood, & Tchanturia, 2018). Given such a close mapping between these topic-summary themes and the data collection questions, it is not surprising that a “good consensus” between two independent coders was achievable. But what potential depth and richness of meaning is lost?

39

What is a fourth critique of small q TA approaches, such as coding reliability?

From Braun and Clarke’s Big Q perspective, there are many troublesome practices in small q TA, but perhaps the most troubling is when the values that underpin it are not articulated or acknowledged. For some, any differences between qualitative and quantitative data/analysis are not ‘chasms’ so much as gentle brooks - not paradigmatically incommensurate. Guest et al., for instance, argued that it is “not true” that “qualitative research methods are difficult to reconcile with a positivist approach” (2012). Key coding reliability TA authors have described TA as an approach that can ‘bridge the divide’ between quantitative and qualitative (or positivist and interpretative) research (Boyatzis, 1998; Guest et al., 2012; Hayes, 1997; Luborsky, 1994). What such definitions rely on (sometimes implicitly) is one type of qualitative orientation, with the ‘chasm’ (Reicher, 2000) between different orientations within qualitative research elided. In our view, TA only ‘bridges the divide’ by relying on a limited and indeed impoverished definition of qualitative research, as tools and techniques (small q), rather than an expanded definition, in which qualitative research provides both a philosophy and techniques for research (Big Q). Indeed, most small q authors implicitly conceptualize TA within realist and experiential (‘empathic’) frameworks, while the possibility of critical (‘suspicious’) orientations (Willig, 2017) is not even acknowledged. What particularly troubles Braun and Clarke is that the limited conception of qualitative inquiry that underpins small q TA is not acknowledged as limited, or often even as situated and partial.

40

What is a fifth critique of small q TA approaches, such as coding reliability?

More broadly, what Braun and Clarke are very troubled by is the way good practice in small q TA is often equated with good practice in TA and qualitative research more generally - for instance, through the inclusion of multiple coders in qualitative ‘quality checklists’ (O’Connor & Joffe, 2020). Ryan and Bernard (2003) claimed analytic validity “hing[es] on the agreement across coders”, and noted that “strong intercoder agreement also suggests that the concept is not just a figment of the investigator’s imagination” (p. 104). In promoting inter-coder reliability as an implicitly or explicitly ‘universal’ - rather than paradigm-embedded - marker of quality, researcher subjectivity becomes problematically conflated with poor quality analysis, with the inference that subjective knowledge is necessarily flawed. It is not! We suspect this embracing of such ‘good practice’ guidelines reflects their easy alignment with the positivist-empiricism that dominates much methodological training, and the wider under-valuing of, and lack of training in, qualitative research across many disciplines. Hence, it is problematic to assume coding reliability quality measures are relevant for all types of TA; they are not! What seems to underlie the equation of coding reliability with good quality coding, in all forms of TA, is a failure to appreciate the divergent paradigmatic assumptions that underpin different TA approaches, and a general lack of understanding of the qualitative paradigm. From Braun and Clarke’s standpoint as Big Q researchers, small q approaches do not allow for the very things that are essential to producing good quality qualitative analysis. There is, however, a logic to small q TA - one consistent with its postpositivist leanings, even if these are rarely explicitly acknowledged.

41

What are codebook approaches to TA or Medium Q TA?

There are several approaches to TA that sit somewhere between the reflexive and coding reliability varieties, and combine values from a qualitative paradigm with more structured coding and theme development processes. Reflecting this combination, Braun and Clarke suggest such approaches are effectively a ‘MEDIUM Q’ approach, and use the term ‘codebook’ to collectively describe them. As in coding reliability TA, the analytic process centers around the development of some kind of codebook or coding frame, and so involves a more structured, less open and organic approach to analysis than in reflexive TA. Measuring coding reliability is, however, not encouraged - and is even discouraged (see N. King, 2016) - and thus researcher subjectivity is not problematized, but recognized and even valued. The codebook, developed after familiarization and (some) coding, serves as a tool to guide data coding and/or a way of charting the coded data. Again, themes are often conceptualized as ‘analytic inputs’, identifiable early in the analytic process - even if evolution of these is possible.

42

What names do the approaches within the codebook cluster go by?

This cluster of approaches often goes by names other than ‘thematic analysis’, including matrix analysis (e.g. Cassell & Nadin, 2004; Miles & Huberman, 1994), framework analysis (e.g. Ritchie & Spencer, 1994), network analysis (e.g. Attride-Stirling, 2001) and template analysis (e.g. N. King, 1998). Braun and Clarke recognize that these different approaches vary, but to them, the differences between these approaches are less significant than what separates them from other approaches in their tripartite TA differentiation.

43

What is ‘template analysis’ within the codebook cluster?

Template analysis was developed by British psychologist Nigel King and colleagues (Brooks et al., 2015; N. King, 1998, 2004, 2012, 2016; N. King & Brooks, 2016, 2018), drawing on the work of Crabtree and Miller (1999). It is framed as a ‘middle ground’ approach to TA (N. King, 2012), offering a set of techniques rather than a methodology. Template analysis is positioned as theoretically ‘independent’ and flexible, able to be used across realist and critical/‘subtle’ realist (Hammersley, 1992) approaches (Brooks et al., 2015), as well as with (some) constructionist approaches to qualitative research. King has suggested that template analysis is suitable for use in constructionist research focused on broader discursive patterns (N. King, 2012; N. King & Brooks, 2018).

44

What is an aspect of ‘template analysis’ within the codebook cluster?

Some aspects of template analysis seem to retain the ‘postpositivist sensibility’ that informs small q TA. Template analysis ostensibly combines a “high degree of structure” and flexibility (N. King, 2012, p. 426), although flexibility here is constrained compared to reflexive TA. The main differences between reflexive TA and template analysis center on how codes and themes are conceptualized and how the analytic process unfolds. In template analysis, codes can be descriptive and interpretative (N. King, 2004), a distinction that echoes that between manifest/descriptive and latent/conceptual coding in other types of TA. Codes are effectively tools for the identification of themes. The analytic process centers on the development of a coding frame - the template - and the generation of a final hierarchical coding/thematic structure through the application and refinement of the template in relation to the whole dataset. The template offers a way of hierarchically mapping patterned meaning, and moving from broader to more precise meanings; multiple layers evidence refinement. One trap for novice researchers is to become overly focused on the details of the template; King (2012) has cautioned that the template is a tool for, and not the purpose of, analysis.

45

What is another aspect of ‘template analysis’ within the codebook cluster?

Although the template can be generated entirely inductively, template development usually combines deductive and inductive processes. So-called a priori codes might be identified ahead of data engagement, as ‘anticipated themes’ developed from the literature or interview questions. Coding of (usually) a subset of the dataset also contributes to developing an initial template or a priori codes/themes (Brooks et al., 2015). ‘Openness’ comes in as the template is developed and refined through full dataset coding. This approach has been described as offering efficiency, especially when working with larger datasets (N. King, 2004), and some facilitative structure for those new to qualitative analysis. With this dual a priori and data-based theme development approach, themes sit somewhere on a spectrum from analytic inputs to analytic outputs. In keeping with a qualitative paradigm, King (2012) has cautioned that themes are not hidden in the data waiting to be found, and are not independent of the researcher. Yet there may be a risk of settling on themes too early, and using coding simply as a means to identify them. Unlike reflexive TA, template analysis does not involve two levels of analytic work - coding and theme development - and two distinct analytic outputs - codes and themes. Rather, the terms code and theme are often used interchangeably, echoing the conflation of these terms in small q TA.

46

Do Braun and Clarke perceive any problems with ‘template analysis’?

In the conceptualization of template analysis, no. The structured-but-flexible approach may offer a useful entry-point into qualitative-values-informed TA, especially for researchers who are working in teams of mixed experience or are new to qualitative research. It is advocated as useful for applied research (Brooks & King, 2012), which may sometimes have such research team characteristics. The differences between template analysis and reflexive TA may reflect the applied psychology roots of template analysis, and the neo-positivist (Duberley, Johnson, & Cassell, 2012) assumptions common in applied fields (Clarke & Braun, 2018). That said, there are elements of the approach that we feel risk foreclosing analysis and undermining depth of engagement, and thus the potential of the method to deliver rich, nuanced analysis of the topic at hand.

47

What is one risk of ‘template analysis’?

There is a risk that an emphasis on developing a priori codes/themes - especially if strongly connected to data collection questions - reduces open, organic interpretation and thus results in foreclosure of analysis. Themes potentially become conceptualized as analytic inputs that evidence is sought for, rather than products of an analytic process in which understanding of patterning and relationships deepens and changes. We specifically identify the use of interview questions as ‘themes’ as an instance of a weak analysis - because no analytic work has been undertaken (Braun & Clarke, 2021c). There is also a risk that what is produced are not themes but topic summaries of data collection questions. For example, the ‘themes’ reported in template research on UK managers’ conceptions of employee training and development (McDowall & Saunders, 2010, p. 617) - “conceptualizations of training and development”; “training and development decisions”; “evaluation of outcomes”; “relationship between training and development” - closely reflect the headings in the interview schedule.

48

What is another risk of ‘template analysis’?

Finally, there is a risk that the meaning focus of Big Q research is lost through an over-emphasis on mapping structure and hierarchy. From Braun and Clarke’s perspective, many different levels of themes can undermine a rich, nuanced understanding. Although layering is advocated in template analysis as a way to capture richness, it risks evoking the production of a quantitative relational model, where the focus is on relationships between ‘variables’ rather than on depth of meaning. In reflexive TA, the production of many themes/levels usually reflects a superficial analytic process and a failure to identify underlying patterns and concepts (Braun & Clarke, 2013). Therefore, Braun and Clarke’s concern with template analysis is that if it’s treated as a technique to apply, without a good understanding of overall qualitative values and processes, it risks a thin and underdeveloped analysis (Connelly & Peltzer, 2016).

49

What is ‘framework analysis’ within the codebook cluster?

Framework analysis offers another example of a codebook approach that is similar to TA (Gale, Heath, Cameron, Rashid, & Redwood, 2013; J. Smith & Firth, 2011). Developed in the UK in the 1980s with an initial focus on applied policy research (Ritchie & Spencer, 1994; see also Ritchie, Spencer, & O’Connor, 2003; Srivastava & Thompson, 2009), the method has been described as particularly useful for research where the objective is clear and known in advance, timeframes are tight, the dataset is large, and the research is conducted in teams, including teams with varying levels of qualitative research experience (Parkinson et al., 2016). The method emphasizes the importance of ‘audit trails’ to map analytic development - a widely used qualitative quality measure - and of analytic transparency (Gale et al., 2013; Leal et al., 2015; Ritchie & Spencer, 1994). Indeed, other forms of TA have been critiqued for ‘subjective’ results and a lack of transparency in how themes were produced, as well as for taking data out of context and the potential for misinterpretation (J. Smith & Firth, 2011). This latter critique connects to what is a key differentiator of this particular approach from other TA approaches: retaining focus on individual cases within an approach that focuses on themes across cases. Using the framework, researchers can compare data not just across, but also within, cases. This dual focus is identified as a key strength of framework analysis (Gale et al., 2013).

50

What is a key characteristic of ‘framework analysis’?

The method’s key characteristic is the development of a (data reduction) framework. Like the template in template analysis, this framework is a tool for analysis, not the endpoint of analysis. Like other codebook approaches, themes - often closer to topic summaries than shared meaning themes - are developed fairly early in the analytic process, from (some) data familiarization and the data collection topics/questions; these form the basis of the framework. Coding is primarily a process for identification of ‘themes’. The framework is a data matrix, with rows (cases or data items, such as an individual interview) and columns (codes) making cells of summarized data (Gale et al., 2013). The coding frame (data matrix) developed from this process is then applied to the dataset, indexing instances of themes. Once the framework is finalized, processes of mapping (of themes, relationships, etc.) and interpretation complete the process. Researchers need to hold in mind a clear distinction between “‘identifying a framework’ (for the purpose of sifting and sorting) and ‘mapping and interpretation’ (for the purpose of making sense and understanding)” (Parkinson et al., 2016, p. 117).
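
To picture the matrix structure just described, here is a minimal sketch in Python (assuming the pandas library); the cases, codes, and cell summaries are all invented for illustration.

```python
# Sketch of a framework matrix: rows are cases (e.g. individual interviews),
# columns are codes, and each cell holds summarized data for that case.
# All cases, codes, and summaries below are invented for illustration.
import pandas as pd

framework = pd.DataFrame(
    {
        "access_to_services": [
            "Cost named as the main barrier",
            "No barriers reported",
        ],
        "family_support": [
            "Relies on daughter for transport",
            "Describes partner as a 'gatekeeper'",
        ],
    },
    index=["Interview_01", "Interview_02"],
)

# The dual focus: read a row for a within-case view, a column for a
# cross-case view of one code.
print(framework.loc["Interview_01"])  # one case, all codes
print(framework["family_support"])    # one code, all cases
```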

51

How can ‘framework analysis’ be utilized?

Framework analysis is not aligned with a particular theoretical approach (Gale et al., 2013; Parkinson et al., 2016) and can be utilized inductively or deductively - although induction is often circumscribed by highly focused aims and objectives (see also N. King, 2012; J. Smith & Firth, 2011).

52

Do Braun and Clarke perceive any problems with ‘framework analysis’?

Again, Braun and Clarke state that the simple answer is no. Framework analysis is clearly a good ‘fit’ with the purpose and goals of applied policy research, especially if the outcomes of research are clearly defined in advance of data analysis. For instance, it has been used to address very practical and concrete questions related to the implementation of a particular policy, and to identify the factors that were helping or hindering that process (e.g. around care for people with intellectual disabilities and mental health problems; M. Kelly & Humphrey, 2013). With focused and applied questions, the structured approach undoubtedly has pragmatic appeal, as it offers clear guidelines to achieve your research purpose. But Braun and Clarke do have cautions about what some aspects of this method might evoke, both in process and outcome, from a Big Q perspective. The method has been acknowledged by key authors as a compromise of some qualitative principles (Ritchie & Lewis, 2003). For those doing research in social and health sciences areas, we suspect framework analysis is appealing because of its neo-positivist elements (the codebook; a topic summary approach to themes).

53

What concerns do Braun and Clarke have about ‘framework analysis’?

Braun and Clarke have questions about the translation of this method out of the policy analysis context to health and other social research, and how it might delimit a fuller Big Q qualitative research practice in those fields. We share Parkinson et al.’s concern that the structured analytic process may inadvertently encourage researchers to view the analytic stages as “mechanical steps to follow”, producing “unthinking” data engagement (2016, p. 125). Parkinson et al. also raised a potentially important analytic difference between the qualitative data gathered in policy/healthcare research and data in psychological (and related social science) research. The former fields tend to work with data that are “more concrete or factual”. Psychological data often focus on “experience, narrative or discourse”, aspects which are less suited to a structured analytic approach. With the latter, they found “that the less clear, more ambiguous and subjective aspects of the data could not be summarized as easily during the indexing stage” (2016, p. 128).

54

What are some challenges with using codebook approaches?

From Braun and Clarke’s perspective, codebook approaches, including template analysis and framework analysis, carry risks of mechanizing and delimiting analytic and interpretative processes in a way which constrains the Big Q potential of these approaches (as some authors acknowledge; Ritchie & Lewis, 2003). These risks are not inherent in the methods, but are perhaps enabled by the structure(s) they offer to support analytic development - particularly for researchers who might not have a good conceptual, paradigmatic, or theoretical grounding in qualitative research values (as might be the case with some applied research teams). Codebook approaches are commonly used in applied research - indeed, they have often been developed specifically for applied research - and are sometimes promoted as the best TA method for applied research. But if these approaches do (inadvertently) produce topic summaries more than conceptually founded patterns of meaning, translation into actionable outcomes - with clear implications for practice - is arguably compromised (see Connelly & Peltzer, 2016; Sandelowski & Leeman, 2012).

55

Do Braun and Clarke believe that ‘reflexive TA’ is the best approach to TA?

Of course Braun and Clarke think it’s a good approach, but it only makes sense to use it if your research values align with the values underpinning reflexive TA. Their overriding concern is that whatever TA approach people use, it aligns with their research values and they use it knowingly.

56

What does the ‘reflexive’ approach (Braun & Clarke, 2006) offer, and what can go wrong with this approach?

What it offers:

  • Theoretical flexibility (but Big Q).

  • Potential for analysis from inductive to deductive.

  • Works well from experiential to critical approaches.

  • Open and iterative analytic process, but with clear guidelines.

  • Development of analytic concepts from codes to themes.

  • Easy to learn but requires a ‘qualitative mindset’ and researcher reflexivity.

  • Works especially well for a single researcher; can also be used by research teams.

  • Works with wide range of datasets and participant group sizes.

What can go wrong:

  • Failure to discuss theory to locate the use of the method.

  • Analysis not grounded in qualitative values, or in broader theoretical constructs.

  • Failure to explicate the particular way(s) the method has been used.

  • Use of topic summaries instead of themes.

  • A too fragmented and particularized analysis, presenting many themes and a complex thematic structure without depth of interpretation.

  • Absence of interpretation; simply descriptive summaries.

57

What does the ‘coding reliability’ approach (Boyatzis, 1998) offer, and what can go wrong with this approach?

What it offers:

  • Theoretical flexibility (but small q).

  • Potential for analysis from inductive to deductive (but fully inductive rare).

  • Focused on experiential approaches.

  • A highly structured analytic approach, which might seem reassuring to new qualitative researchers.

  • Potential to ‘speak within’ the language of postpositivist quantitative research.

  • Potential for ‘hypothesis testing’ (deductive TA).

  • Requires research team (more than one coder) for reliability.

What can go wrong:

  • Failure to discuss (big) theory or conceptual frameworks for the analysis.

  • Inconsistency of judgement in coding data.

  • ‘Bias’ from the researcher affecting the coding and identification of themes.

  • No data interpretation; simply descriptive summaries.

58

What does the ‘codebook’ approach (template analysis; N. King, 2012) offer, and what can go wrong with this approach?

What it offers:

  • Theoretical flexibility.

  • Can be used inductively or deductively, but typically occupies a middle-ground between these.

  • Particularly suited for experiential approaches, but can be used critically.

  • Structured, systematized but flexible techniques for data analysis, which can be helpful for new qualitative researchers.

  • Structured process offers some efficiency in analysis.

  • Useful for exploring the perspectives of different groups.

  • Can be used by single researchers or teams.

  • Fairly easy to learn.

  • Works well with larger datasets.

What can go wrong:

  • Failure to discuss theoretical or methodological orientation.

  • Codebook (the template) treated as purpose of analysis; lack of development of themes during data engagement.

  • Overemphasis on (hierarchical) thematic structure at the expense of depth of meaning.

  • No data interpretation; simply descriptive summaries.

59

What is ‘thematic coding’?

Thematic coding was particularly common before TA was widely recognized as a distinctive method, but remains used and discussed in methodological texts (e.g. Ayres, 2008; Flick, 2018; Rivas, 2018). Like TA more broadly, thematic coding has often been presented as a generic technique for qualitative analysis that informs many different analytic approaches, centered on the development of “a framework of thematic ideas” (Gibbs, 2007, p. 38) about data. Most authors describe more inductive and more deductive orientations as possible, and encourage (some) openness around coding and the evolution of analytic ideas throughout the coding process. Rivas (2018) emphasized an increasingly interpretive approach as the analysis progresses. Some make a distinction between data-driven (descriptive) codes - close to the respondent’s terms - and concept-driven (analytic and theoretical) codes (e.g. Gibbs, 2007). Gibbs emphasized the importance of a flexible approach and warned researchers “not to become too tied to the initial codes you construct” (2007, p. 46).


60

What does ‘thematic coding’ involve?

German psychologist Uwe Flick’s description of thematic coding (e.g. Flick, 2014a, 2018) bears similarity to codebook types of TA, with some delimiting of analytic focus before the analysis, and an emphasis on analyzing individual data items in turn before developing an overall thematic patterning. The method typically involves the use of grounded theory techniques, sometimes in combination with techniques from TA, to develop themes from data (the purpose of TA), rather than the categories and concepts associated with grounded theory analysis (see Birks & Mills, 2015; Charmaz, 2014). Thematic coding is definitely not intended to develop a grounded theory, but grounded theory techniques like line-by-line coding, constant comparison, and memo writing feature in various accounts of thematic coding. Its use thus evokes the use of ‘grounded theory’ techniques to do TA that we have critiqued elsewhere (Braun & Clarke, 2006, 2013).

61
New cards

How should researchers go about using TA?

Does it matter that researchers use techniques and concepts from grounded theory to produce something akin to TA, especially now, when there is a well-developed range of methods for TA? Braun and Clarke think it does, but it is important to interrogate such reactions. We should ask: is suggesting people ‘should use TA to do TA’ actually methodolatry (Chamberlain, 2000), a preoccupation with the purity of method, or proceduralism, “where analysis focuses on meticulously following set procedures rather than responding creatively and imaginatively to data” (N. King & Brooks, 2018, p. 231)? Braun and Clarke don’t think it is; indeed, they concur with such critiques. Promoting rigid rules for good practice, and insisting on ‘one true way’ of applying a method, thereby avoiding thinking, theory and taking reflexivity seriously, can lead to qualitative research that is rather limited: often merely descriptive or summative; with little or no interpretation of the data; implicitly peppered with postpositivism; and lacking depth of engagement, thoughtfulness and creativity. Braun and Clarke advocate for creativity and thoughtful reflexive practice in qualitative research (Braun, Clarke, & Hayfield, 2019). But they also advocate for clarity, and for knowing why you’re doing what you do. With thematic coding, their concern arises when the use of grounded theory techniques to do ‘TA’ does not evidence such knowingness, and when there is no clear and strong rationale for why these (grounded theory) procedures are used to generate themes rather than the categories and concepts associated with a grounded theory. Productive debate about method requires that researchers use techniques knowingly and are able to clearly articulate the assumptions underpinning them (Braun & Clarke, 2021c). Braun and Clarke are not yet convinced of the value of thematic coding, or of what it offers that is distinct from TA.

62
New cards

What is ‘qualitative evidence synthesis’ or specifically ‘thematic synthesis’?

Thematic synthesis draws on TA and is the process of integrating the results of multiple qualitative studies (J. Thomas & Harden, 2008). Broadly speaking, it is part of the systematic review tradition (a type of literature review following a specific protocol and methodology; see Boland, Cherry, & Dickson, 2017; Higgins & Green, 2011). The synthesis involves the amalgamation of qualitative research reports (‘primary research reports’) that relate to a specific topic, by identifying the key concepts that underpin several studies in a particular area of research.

63
New cards

What do some believe distinguishes ‘qualitative evidence synthesis’ from the more ‘traditional narrative literature reviews’?

Some argue that what distinguishes qualitative evidence synthesis from more traditional narrative literature reviews is that the synthesis ‘goes beyond’ the content of the original studies to make “a new whole out of the parts” (Cruzes & Dyba, 2011), by providing new concepts, theories or higher-level interpretation. A distinction is often made between thematic synthesis and TA - the latter involving the identification of important or common themes in a body of research and the summarizing of these under thematic headings (e.g. Garcia et al., 2002).

64
New cards

How can ‘TA’ be used as a tool for qualitative evidence synthesis?

Like TA as a primary research method, TA as a tool for qualitative evidence synthesis can be ‘inductive’, grounded in themes identified in the literature, or deductive, evaluating particular themes through an investigation of the literature (Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005).

65
New cards

What is ‘polytextual TA’ for visual data analysis?

Kate Gleeson (2011, 2021) developed what she called polytextual thematic analysis for analyzing visual data, drawing on Hayes’ (1997, 2000) reflexive approach to TA. Her approach was firmly Big Q and aimed to capture recurring patterns in the form and content of visual images. The process she outlined centered on “viewing the pictures repeatedly while reading and considering various cultural images and texts that enable their interpretation” (2011, p. 319).

66
New cards

What is Gleeson’s 11 step process of using polytextual TA for visual data analysis?

  1. Viewing the images repeatedly and noting potential (proto-) themes and the features of the image that evoke the themes;

  2. Reflecting on the effects the images have on you and describing these;

  3. For all recurrent proto-themes, compiling the relevant images and reflecting on whether the theme is distinct;

  4. Writing a brief definition of the proto-theme;

  5. Identifying all instances of the proto-theme across the data items;

  6. Once again, compiling relevant material for each proto-theme, revising the definition of the theme if necessary and considering elevating the proto-theme to a theme (NB: this means it has been repeatedly checked and considered; it does not mean it is fixed or finalized), compiling your notes on the elements of the various images that best illustrate each theme;

  7. Continuing to identify themes until no further distinctive themes are developed;

  8. Reviewing the theme definitions and considering their distinctiveness, redefining themes if necessary to highlight the distinctiveness of each theme;

  9. Exploring whether the themes cluster together to form higher order themes;

  10. Defining the higher order themes and considering all the themes in relation to them;

  11. Finalizing the themes that best address the research question and will constitute the basis of the write-up.

67
New cards

How is Gleeson’s ‘polytextual TA’ approach similar to Braun and Clarke’s ‘reflexive TA’ approach?

Conceptually and methodologically, Gleeson’s approach seems very similar to reflexive TA, as it encompasses:

  • Data familiarization;

  • Different levels of analytic engagement (themes and higher order themes - Braun and Clarke think these broadly map onto codes and themes in their reflexive TA approach);

  • Clustering lower order analytic units into higher order analytic units;

  • Writing theme definitions;

  • Processes of review (in relation to the coded data);

  • And reflection on the relationship between themes, and the distinctiveness of each theme.

68
New cards

What does ‘visual TA’ allow the researcher to develop?

Visual TA allows the researcher to develop different ways of thinking about the social world and our experience of it. We can build thematic models that may illuminate concepts that are not as clearly evidenced in other approaches. On its own, or as part of a qualitative approach using multiple methods, the visual has a key role to play in social research.

69
New cards

What is ‘off-label’ TA - in other words, combining thematic analysis with other approaches?

Braun and Clarke briefly note the increasingly common ‘mashing-up’ of TA with other analytic approaches to produce distinct ‘hybrid’ approaches. In addition to engaging in ‘academic bricolage’ (Kincheloe, 2001) by blending TA with elements of discourse analysis (see Box 8.5), researchers are combining TA with a range of other approaches, including idiographic approaches such as narrative analysis (e.g. Palomaki, Laakasuo, & Salmela, 2013; Ronkainen, Watkins, & Ryba, 2016) and case study research (e.g. Cedervall & Aberg, 2010; Gadberry, 2014; Manago, 2013).

70
New cards

Can methodological elements be combined?

Methodological elements can be combined, and novel approaches can be taken, in order to advance and improve existing methodological approaches. As Bradbury-Jones et al. put it: “We do not dissuade such hybridization. Rather, we argue that it needs to occur knowingly and purposefully and be rooted in a sound understanding and reporting of the compatibility of different philosophical underpinnings and practical application” (Bradbury-Jones et al., 2017, p. 11). For Braun and Clarke, what’s important is that such ‘mash-ups’ and methodological mixes represent intentional, reflexive choices on the part of the researcher, not the unknowing combining of different types of TA, or of TA and grounded theory procedures (see Braun & Clarke, 2021c). Theoretically unknowing blending is problematic because it can compromise the application of methodological rigor to data analysis and challenge the validity of qualitative research findings. The uncritical blending of methods threatens to result in a product that Morse (1991) describes as a “sloppy mishmash” rather than good science (DeSantis & Ugarriza, 2000, p. 351). So perhaps the take-home message is a question and an answer: (Q) can I ‘break the rules’ and do things differently with TA? (A) yes, but if you do, make sure you do so knowingly. This is an adventure, but you need a solid (theoretical) foundation for knowing which ‘rules’ can be broken, and which are important to retain. Innovation for innovation’s sake isn’t the point. A robust process that will generate important and rich(er) understandings is the point.