Methodology notes
This document is an overview of methodological topics and concerns. It is a place where I think through and justify my methodological decisions, and identify the methods and procedures through which I implement them.
Significant Concepts and Frameworks
Multicase Studies
These notes describe the features, affordances and limitations of case study research, and articulate factors corresponding with different kinds of case studies.
I do notice a distinction between two schools of thought, which seem to be spearheaded by Stake and Yin. I generally favour Stake’s flexible approach, and it seems well aligned with other methodological works I’ve been reading (e.g. Abbott 2004; Ragin and Becker 1992).
Stake’s Approach
In case-study research, cases represent discrete instances of a phenomenon that inform the researcher about it. The cases are not the subjects of inquiry, and instead represent unique sets of circumstances that frame or contextualize the phenomenon of interest (Stake 2006: 4-7).
Cases usually share common reference to the overall research themes, but exhibit variations that enable a researcher to capture different outlooks or perspectives on matters of common concern. Drawing from multiple cases thus enables comprehensive coverage of a broad topic that no single case may cover on its own (Stake 2006: 23). In other words, cases are contexts that ascribe particular local flavours to the activities I trace, and which I must consider to account fully for the range of motivations, circumstances and affordances that back decisions to perform activities and to implement them in specific ways.
Moreover, the power of case study research derives from identifying consistencies that relate cases to each other, while simultaneously highlighting how their unique and distinguishing facets contribute to their representativeness of the underlying phenomenon. Case study research therefore plays on the tensions that challenge relationships among cases and the phenomenon that they are being called upon to represent (Ragin 1999: 1139-1140).
Stake (2006: 4-6) uses the term quintain to describe the group, category or phenomenon that binds together a collection of cases. A quintain is an object, phenomenon or condition to be studied – “a target, not a bull’s eye” (Stake 2006: 6). “The quintain is the arena or holding company or umbrella for the cases we will study” (Stake 2006: 6). The quintain is the starting point for multi-case research.
According to Stake (2006: 6):
Multicase research starts with the quintain. To understand it better, we study some of its single cases — its sites or manifestations. But it is the quintain we seek to understand. We study what is similar and different about the cases in order to understand the quintain better.
Stake (2006: 8) then goes on:
When the purpose of a case is to go beyond the case, we call it an “instrumental” case study. When the main and enduring interest is in the case itself, we call it “intrinsic” case study (Stake 1988). With multicase study and its strong interest in the quintain, the interest in the cases will be primarily instrumental.
Abbott’s (2004: 22) characterization of Small-N comparison is very reminiscent of Stake’s (2006) account of the case-quintain dialectic:
Small-N comparison attempts to combine the advantages of single-case analysis with those of multicase analysis, at the same time trying to avoid the disadvantages of each. On the one hand, it retains much information about each case. On the other, it compares the different cases to test arguments in ways that are impossible with a single case. By making these detailed comparisons, it tries to avoid the standard criticism of single-case analysis — that one can’t generalize from a single case — as well as the standard criticism of multicase analysis — that it oversimplifies and changes the meaning of variables by removing them from their context.
It should be noted that case study research limits my ability to define causal relationships or to derive findings that may be generalized across the whole field of epidemiology. This being said, case study research allows me to articulate the series of inter-woven factors that impact how epidemiological researchers coordinate and participate in data-sharing initiatives, while explicitly accounting for and drawing from the unique and situational contexts that frame each case.
Stake (2006: 23) recommends selecting 4 to 10 cases and identifies three main criteria for selecting cases:
- Is the case relevant to the quintain?
- Do the cases provide diversity across contexts?
- Do the cases provide good opportunities to learn about complexity and contexts?
For qualitative fieldwork, we will usually draw a purposive sample of cases, a sample tailored to our study; this will build in variety and create opportunities for intensive study (Stake 2006: 24).
Stake (2010: 122) prioritizes doing research to understand something or to improve something, and I generally agree with his rationalization; research helps reframe problems and establish different decision options.
Yin’s Approach
According to Yin (2014: 16), “a case study is an empirical inquiry that investigates a contemporary phenomenon (the ‘case’) in depth and within its real-world context, especially when the boundaries between phenomenon and context may not be clearly evident.”
He goes on to document some features of a case study: “A case study inquiry copes with the technically distinctive situation in which there will be many more variables of interest than data points, and as one result relies on multiple sources of evidence, with data needing to converge in a triangulating fashion, and as another result benefits from the prior development of theoretical propositions to guide data collection and analysis.” (Yin 2014: 17)
Yin (2014) is more oriented toward what he refers to as a realist perspective, which he pits against relativist and interpretivist perspectives (used interchangeably, it seems), and which I might refer to as constructivist. He characterizes relativist perspectives as “acknowledging multiple realities having multiple meanings, with findings that are observer dependent”. His prioritizing of a realist approach corresponds with the analysis by Yazan (2015), who compared Yin with Stake and Merriam. According to Yazan (2015: 137), Yin evades making statements about his epistemic commitments, and is characterized as post-positivist.
Yin (2014) is very concerned with research design in case study research. He posits that, in a colloquial sense, “a research design is a logical plan for getting from here to there, where here may be defined as the initial set of questions to be answered, and there is some set of conclusions (answers) about these questions.” (Yin 2014: 28)
Yin distinguishes between a research design and a work plan. A research design deals with a logical problem, whereas a work plan deals with a logistical problem. This seems reminiscent of Brian Cantwell Smith’s distinction between skeletons and outlines.
Yin lists five components of a research design:
- A case study’s questions;
- its propositions, if any;
- its unit(s) of analysis;
- the logic linking the data to the propositions; and
- the criteria for interpreting the findings.
Interestingly, I have been instinctively following these steps, and am currently hovering somewhere between components 3 and 4, while dipping back to 2 once in a while too.
The problem of defining the unit of analysis is salient to me right now. According to Yin (2014: 32), the unit of analysis may change as the project progresses, depending on initial misconceptions (he uses the example of a unit of analysis changing from neighbourhoods to small groups, as contextualized by the socio-geographical entity of the neighbourhood, which is laden with issues of class, race, etc.). In my own situation, the unit of analysis may hover between the harmonization initiative and the people, activities or infrastructures that make it work.
In the section on criteria for interpreting the findings, Yin emphasizes the role of rival theories, which is akin to a concern with falsifiability as a means of validating truth claims, and which betrays his positivist leanings. This may be compared with Stake’s emphasis on triangulation, which is more concerned with internal cohesiveness. Similarly, Yin cites Corbin and Strauss regarding the role of theory or theoretical propositions in research design, which similarly reveals a concern with rigorous upfront planning and strict adherence to research design as a key aspect of deriving valid findings.
Regarding generalizability, Yin (2014: 40-41) states that “Rather than thinking about your case as a sample, you should think of it as the opportunity to shed empirical light about some theoretical concepts or principles, not unlike the motive of a laboratory investigator in conceiving of and then conducting a new experiment.” He goes on to state that case studies tend to strive for analytic generalizations that go beyond the specific case that has been studied, and which apply to other concrete situations rather than just abstract theory building.
Logistics of case study design
Preparing to select case study data
Yin (2014: 72-73) identifies five desired attributes for collecting case study data:
- Ask good questions — and interpret answers fairly.
  - “As you collect case study evidence, you must quickly review the evidence and continually ask yourself why events or perceptions appear as they do.” (73)
  - A good indicator of having asked good questions is mental and emotional exhaustion at the end of each fieldwork day, due to the depletion of “analytic energy” associated with staying attentive and on your toes. (73-74)
- Be a good “listener” not trapped by existing ideologies or preconceptions.
  - Sensing through multiple modalities, not just spoken words.
  - Also subtext, as elicited through choices of terms used, mood and affective components. (74)
- Stay adaptive, so that newly encountered situations can be seen as opportunities, not threats.
  - Remember the original purpose, but be willing to adapt to unanticipated circumstances. (74)
  - Emphasize balancing adaptability with rigour, but not with rigidity. (75)
- Have a firm grasp of what is being studied, even when in an exploratory mode.
  - Need to do more than merely record data; interpret information as it is being collected and know immediately whether there are contradictions or complementary statements to follow up on. (75-76)
- Avoid biases by being sensitive to contrary evidence, and know how to conduct research ethically.
  - Maintain strong professional competence, including keeping up with related research, ensuring accuracy, striving for credibility, and acknowledging and mitigating bias.
Yin advocates for adoption of case study protocols. He provides an example of a table of contents for case study protocols, which generally comprise four sections:
- Overview of the case study
- Data collection procedures
- Data collection questions
- Guide for the case study report
Triangulation
Triangulation is a process of gaining assurance. It is also sometimes called crystallization.
“Each important finding needs to have at least three (often more) confirmations and assurances that key meanings are not being overlooked.” (Stake 2006: 33) Triangulation is a process of repetitious data gathering and critical review of what is being said. (Stake 2006: 34)
What needs triangulation? (Stake 2006: 35-36)
- If the description is trivial or beyond question, there is no need to triangulate.
- If the description is relevant and debatable, there is much need to triangulate.
- If the data are critical to a main assertion, there is much need to triangulate.
- If the data are evidence for a controversial finding, there is much need to triangulate.
- If a statement is clearly a speaker’s interpretation, there is need to triangulate the quotation itself (that it was said), but not its content.
Stake (2006: 37) cites Denzin (1989) who highlighted several kinds of triangulation, leading to a few advisories:
- Find ways to use multiple rather than single observers of the same thing.
- Use second and third perspectives, e.g. the views of teachers, students and parents.
- Use more than one research method on the same thing, e.g. document review and interview.
- Check carefully to decide how much the total description warrants generalization.
  - Do your conclusions generalize across other times or places?
  - Do your conclusions about the aggregate generalize to individuals?
  - Do findings of the interaction among individuals in one group pertain to other groups?
  - Do findings about the aggregate of these people generalize to a population?
Cross-Case Analysis Procedure
Stake (2006: Chapter 3) lays out a procedure for deriving synthetic findings from data collected across cases. He frames this in terms of a dialectic between cases and quintains. He identifies three tracks (Stake 2006: 46):
- Track 1: Maintains the case findings and the situationality.
- Track 2: Merges similar findings, maintaining a little of the situationality.
- Track 3: The most quantitative track, shifts the focus from findings to factors.
According to Stake, case reports should be created independently and then brought together by a single individual when working in a collaborative project. In keeping with the case-quintain dialectic, this integration must involve strategically putting the cases aside and bringing them back in to identify convergences and divergences, similarities and differences, normalities and discrepancies among them.
There is some detailed discussion about different kinds of statements, i.e. themes, findings, factors and assertions, but I find this a bit too much detail to get into at this point in my methodological planning. In general though, Stake documents a process whereby an analyst navigates back and forth between the general and the situational, presenting tentative statements that are shored up, modified or discarded through testing the compatibility of the evidence across cases.
Single cases
Stake (2000) is concerned with identifying what can be learned from a single case. He (2000: 437) identifies three kinds of cases:
- Intrinsic case studies as being driven by a desire to understand the particular case.
- Instrumental case studies are examined “mainly to provide insight into an issue or to redraw a generalization.”
- Collective case studies “investigate a phenomenon, population or general condition”.
Stake (2000) frames case research around a tension between the particular and the general, which echoes the case-quintain dilemma he described in (Stake 2006: 4-6).
Some scattered practical guidance
Stake (2006: 18-22) provides a detailed and realistic overview of common challenges involved in collaborative qualitative research. This could be handy in future work when planning a multicase project involving multiple researchers.
Stake (2006: 29-33) provides guidance on how to plan and conduct interviews in multicase research, including a series of helpful prompts and questions to ask yourself while designing the interview. One thing that stands out is his recommendation that an interview should be more about the interviewee than about the case. It’s necessary to find out about the interviewee to understand their interpretations, but what they reveal about the quintain is more important.
On page 34, Stake (2006) also provides some practical tips for documenting and storing data, after Huberman and Miles (1994).
Stake (2006: Chapter 4) includes a chapter on procedures for reporting the findings, and I may return to this later on once I need to initiate this phase of work. It addresses concerns about how to articulate comparisons, concerns about generalization, and how to handle advocacy based on findings.
See Stake (2006) Chapter 5 for a step-by-step overview of a multicase study analysis. The rest of the volume after that includes three very detailed examples from his own work.
Grounded theory
These notes are largely drawn from Charmaz (2000), which I understand to be a fairly balanced and comprehensive overview of the Glaser / Strauss and Corbin debate, and of the situation of specific methods and techniques in relation to these different stances. I also value Charmaz’s position as the proponent of a constructivist approach.
According to Charmaz (2000: 509):
Essentially, grounded theory methods consist of systematic inductive guidelines for collecting and analyzing data to build middle-range theoretical frameworks that explain the collected data.
Charmaz (2000: 511) goes on to situate grounded theory in relation to what was the norm prior to its invention:
Glaser and Strauss’s (1967) work was revolutionary because it challenged (a) arbitrary divisions between theory and research, (b) views of qualitative research as primarily a precursor to more “rigorous” quantitative methods, (c) claims that the quest for rigor made qualitative research illegitimate, (d) beliefs that qualitative methods are impressionistic and unsystematic, (e) separation of data collection and analysis, and (f) assumptions that qualitative research could produce only descriptive case studies rather than theory development (Charmaz 1995).
Prior to Glaser and Strauss (1967), qualitative analysis was taught rather informally — they led the way in providing written guidelines for systematic qualitative data analysis, with explicit procedures (Charmaz 2000: 512).
Glaser brought his very positivist assumptions from his work at Columbia, while Strauss’s work in Chicago with Herbert Blumer and Robert Park infused a pragmatic philosophical approach to the study of process, action and meaning that reflects symbolic interactionism.
Glaser
Glaser’s position comes close to traditional positivism, with assumptions of an objective, external reality, a neutral observer who discovers data, and a reductionist form of inquiry into manageable research problems. According to Charmaz (2000: 511), regarding Glaser’s approach:
Theoretical categories must be developed from analysis of the collected data and must fit them; these categories must explain the data they subsume. Thus grounded theorists cannot shop their disciplinary stores for preconceived concepts and dress their data in them. Any existing concept must earn its way into the analysis. … The relevance of a grounded theory derives from its offering analytic explanations of actual problems and basic processes in the research setting. A grounded theory is durable because researchers can modify their emerging or established analyses as conditions change or further data are collected.
Corbin and Strauss
Strauss and Corbin assume an objective reality, aim toward unbiased data collection, propose a series of technical procedures, and espouse verification. However, they are postpositivist because they propose giving voice to their respondents, representing them as accurately as possible, discovering and reckoning with how their respondents’ views on reality differ from their own, and reflecting on the research process as one way of knowing.
Corbin and Strauss (1990) “gained readers but lost the sense of emergence and open-ended character of Strauss’s earlier volume and much of his empirical work. The improved and more accessible second edition of Basics (Strauss and Corbin 1998) reads as less prescriptive and aims to lead readers to a new way of thinking about their research and about the world.” (Charmaz 2000: 512)
In personal communications, Strauss apparently became more insistent that grounded theory should be verificational in nature.
Glaser (1992) responded to Strauss and Corbin (1990), repudiating what he perceived as forcing preconceived questions and frameworks on the data. Glaser considered it better to allow theory to “emerge” from the data, i.e. to let the data speak for themselves.
Charmaz identifies these two approaches as having a lot in common. They both advocate for mitigating factors that would hinder objectivity and minimize intrusion of the researcher’s subjectivity, and they are both embedded in positivist attitudes, with a researcher sitting outside the observed reality. Glaser exemplifies these through discovering and coding data, and using systematic comparative methods, whereas Strauss and Corbin maintain a similar distance through their analytical questions, hypotheses and methodological applications. They both engage in “silent authorship” and usually write about their data as distant experts (Charmaz and Mitchell 1996).
Constructivist Grounded Theory
Constructivist grounded theory celebrates firsthand knowledge of empirical worlds, takes a middle ground between postmodernism and positivism, and offers accessible methods for taking qualitative research into the 21st century. (510)
The power of grounded theory lies in its tools for understanding empirical worlds. We can reclaim these tools from their positivist underpinnings to form a revised, more open-ended practice of grounded theory that stresses its emergent, constructivist elements. We can use grounded theory methods as flexible, heuristic strategies rather than as formulaic procedures. (510)
Three aspects to Charmaz’s argument (510):
- Grounded theory strategies need not be rigid or prescriptive;
- a focus on meaning while using grounded theory furthers, rather than limits, interpretive understanding; and
- we can adopt grounded theory strategies without embracing the positivist leanings of earlier proponents of grounded theory.
Charmaz repudiates the notion that data speak for themselves, that data do not lie. She recognizes that data are constructs of the research process, framed by the questions we ask informants and by the methodological tools of our collection procedures.
Charmaz (2000: 515) advocates for what seems to be a dialogical approach to coding, between researcher and the data:
We should interact with our data and pose questions to them while coding. Coding helps us to gain a new perspective on our material and to focus further data collection, and may lead us in unforeseen directions. Unlike quantitative research that requires data to fit into preconceived standardized codes, the researcher’s interpretations of data shape his or her emergent codes in grounded theory.
Charmaz articulates open/initial coding as proceeding line by line to get a general sense of what the data contain. It is meant to keep the researcher close to the data, and to remain attuned to the subjects’ views of their realities.
Line-by-line coding sharpens our use of sensitizing concepts — that is, those background ideas that inform the overall research problem. Sensitizing concepts offer ways of seeing, organizing, and understanding experience; they are embedded in our disciplinary emphases and perspectival proclivities. Although sensitizing concepts may deepen perception, they provide starting points for building analysis, not ending points for evading it. We may use sensitizing concepts only as points of departure from which to study the data.
Much of the rest of the Charmaz (2000) paper is an overview of coding and memoing methods, as well as theoretical sampling. The emphasis is on situating these techniques in the Glaser / Strauss and Corbin debate, and it will be better to refer to Charmaz (2014) for in-depth notes on these techniques.
Charmaz (2000: 521-522) provides an apt account of a significant critique of grounded theory, and poses her constructivist approach as a potential means of resolving it. Specifically, she refers to the notion that grounded theory (as traditionally conceived by both Glaser and Strauss and Corbin) “fractures” the data, making them easier to digest in an analytical sense, but also making it more difficult to engage with in a holistic manner. This is precisely the point of the original approach, to present qualitative data as data — as conceived and valued by quantitative researchers, i.e. as discrete, corpuscular, disembodied, re-arrangeable and distant entities. The text of these two large paragraphs is copied here:
Conrad (1990) and Riessman (1990) suggest that “fracturing the data” in grounded theory research might limit understanding because grounded theorists aim for analysis rather than the portrayal of subjects’ experience in its fullness. From a grounded theory perspective, fracturing the data means creating codes and categories as the researcher defines themes within the data. Glaser and Strauss (1967) propose this strategy for several reasons: (a) to help the researcher avoid remaining immersed in anecdotes and stories, and subsequently unconsciously adopting subjects’ perspectives; (b) to prevent the researcher’s becoming immobilized and overwhelmed by voluminous data; and (c) to create a way for the researcher to organize and interpret data. However, criticisms of fracturing the data imply that grounded theory methods lead to separating the experience from the experiencing subject, the meaning from the story, and the viewer from the viewed. In short, the criticisms assume that the grounded theory method (a) limits entry into subjects’ worlds, and thus reduces understanding of their experience; (b) curtails representation of both the social world and subjective experience; (c) relies upon the viewer’s authority as expert observer; and (d) posits a set of objectivist procedures on which the analysis rests.
Researchers can use grounded theory methods to further their knowledge of subjective experience and to expand its representation while neither remaining external from it nor accepting objectivist assumptions and procedures. A constructivist grounded theory assumes that people create and maintain meaningful worlds through dialectical processes of conferring meaning on their realities and acting within them (Bury 1986; Mishler 1981). Thus social reality does not exist independent of human action. Certainly, my approach contrasts with a number of grounded theory studies, methodological statements, and research texts (see, e.g., Chenitz and Swanson 1986; Glaser 1992; Martin and Turner 1986; Strauss and Corbin 1990; Turner 1981). By adopting a constructivist grounded theory approach, the researcher can move grounded theory methods further into the realm of interpretive social science consistent with a Blumerian (1969) emphasis on meaning, without assuming the existence of a unidimensional external reality. A constructivist grounded theory recognizes the interactive nature of both data collection and analysis, resolves recent criticisms of the method, and reconciles positivist assumptions and postmodernist critiques. Moreover, a constructivist grounded theory fosters the development of qualitative traditions through the study of experience from the standpoint of those who live it.
Charmaz’s (2000: 523) proposal for a re-visioned grounded theory poses research as a materializing process:
A re-visioned grounded theory must take epistemological questions into account. Grounded theory can provide a path for researchers who want to continue to develop qualitative traditions without adopting the positivistic trappings of objectivism and universality. Hence the further development of a constructivist grounded theory can bridge past positivism and a revised future form of interpretive inquiry. A revised grounded theory preserves realism through gritty, empirical inquiry and sheds positivistic proclivities by becoming increasingly interpretive.
Charmaz (2000: 523) addresses realism and truth in constructivist grounded theory, and explicitly relates it to Blumerian situated interactionism:
A constructivist grounded theory distinguishes between the real and the true. The constructivist approach does not seek truth — single, universal, and lasting. Still, it remains realist because it addresses human realities and assumes the existence of real worlds. However, neither human realities nor real worlds are unidimensional. We act within and upon our realities and worlds and thus develop dialectical relations among what we do, think, and feel. The constructivist approach assumes that what we take as real, as objective knowledge and truth, is based upon our perspective (Schwandt 1994). The pragmatist underpinnings in symbolic interactionism emerge here. Thomas and Thomas (1928: 572) proclaim, “If human beings define their situations as real, they are real in their consequences”. Following their theorem, we must try to find what research participants define as real and where their definitions of reality take them. The constructivist approach also fosters our self-consciousness about what we attribute to our subjects and how, when, and why researchers portray these definitions as real. Thus the research products do not constitute the reality of the respondents’ reality. Rather, each is a rendering, one interpretation among multiple interpretations, of a shared or individual reality. That interpretation is objectivist only to the extent that it seeks to construct analyses that show how respondents and the social scientists who study them construct those realities — without viewing those realities as unidimensional, universal, and immutable. Researchers’ attention to detail in the constructivist approach sensitizes them to multiple realities and the multiple viewpoints within them; it does not represent a quest to capture a single reality.
Thus we can recast the obdurate character of social life that Blumer (1969) talks about. In doing so, we change our conception of it from a real world to be discovered, tracked, and categorized to a world made real in the minds and through the words and actions of its members. Thus the grounded theorist constructs an image of a reality, not the reality — that is, objective, true, and external.
On the other hand, Charmaz (2000: 524) frames objectivist grounded theory as believing in some kind of truth:
Objectivist grounded theory accepts the positivistic assumption of an external world that can be described, analyzed, explained, and predicted: truth, but with a small t. That is, objectivist grounded theory is modifiable as conditions change. It assumes that different observers will discover this world and describe it in similar ways. That’s correct — to the extent that subjects have comparable experiences (e.g., people with different chronic illnesses may experience uncertainty, intrusive regimens, medical dominance) and viewers bring similar questions, perspectives, methods, and, subsequently, concepts to analyze those experiences. Objectivist grounded theorists often share assumptions with their research participants — particularly the professional participants. Perhaps more likely, they assume that respondents share their meanings. For example, Strauss and Corbin’s (1990) discussion of independence and dependence assumes that these terms hold the same meanings for patients as for researchers.
Charmaz (2000: 525) further embeds constructivist grounded theory as a way to fulfill Blumer’s symbolic interactionism:
What helps researchers develop a constructivist grounded theory? How might they shape the data collection and analysis phases? Gaining depth and understanding in their work means that they can fulfill Blumer’s (1969) call for “intimate familiarity” with respondents and their worlds (see also Lofland and Lofland 1984, 1995). In short, constructing constructivism means seeking meanings — both respondents’ meanings and researchers’ meanings.
Charmaz (2000: 524) on the concretization of procedures from what were originally meant to be guidelines:
Guidelines such as those offered by Strauss and Corbin (1990) structure objectivist grounded theorists’ work. These guidelines are didactic and prescriptive rather than emergent and interactive. Sanders (1995: 92) refers to grounded theory procedures as “more rigorous than thou instructions about how information should be pressed into a mold”. Strauss and Corbin categorize steps in the process with scientific terms such as axial coding and conditional matrix (Strauss 1987; Strauss and Corbin 1990, 1994). As grounded theory methods become more articulated, categorized, and elaborated, they seem to take on a life of their own. Guidelines turn into procedures and are reified into immutable rules, unlike Glaser and Strauss’s (1967) original flexible strategies. By taking grounded theory methods as prescriptive scientific rules, proponents further the positivist cast to objectivist grounded theory.
On the modes of reasoning behind grounded theory
Kelle (2005) is an overview of the Glaser / Strauss and Corbin split. References to Kelle (2005) have no page numbers since it is published in an online-only journal and does not specify paragraph numbers.
Highlights a primary impetus behind Glaser and Strauss (1967), which used political analogies to distinguish between “theoretical capitalists” and “proletariat testers”, and which sought to unify the field of sociology by de-centering emphasis on theories developed by “great men”.
A common thread in this paper is sensitivity to the practical challenges of actually doing grounded theory according to Glaser’s approach:
The infeasibility of an inductivist research strategy which demands an empty head (instead of an “open mind”) cannot only be shown by epistemological arguments, it can also be seen in research practice. Especially novices in qualitative research with the strong desire to adhere to what they see as a basic principle and hallmark of Grounded Theory — the “emergence” of categories from the data — often experience a certain difficulty: in open coding the search for adequate coding categories can become extremely tedious and a subject of sometimes numerous and endless team sessions, especially if one hesitates to explicitly introduce theoretical knowledge. The declared purpose to let codes emerge from the data then leads to an enduring proliferation of the number of coding categories which makes the whole process insurmountable.
Kelle (2005) basically takes down the original Glaser and Strauss (1967) and subsequent reflection on theoretical sensitivity (Glaser 1978). He highlights fundamental contradictions and oversights with regards to the role of theory in grounded theory, specifically with regards to the notion that such research can be accomplished with inductive purity:
Consequently, in the most early version of Grounded Theory the advice to employ theoretical sensitivity to identify theoretical relevant phenomena coexists with the idea that theoretical concepts “emerge” from the data if researchers approach the empirical field with no preconceived theories or hypotheses. Both ideas which have conflicting implications are not integrated with each other in the Discovery book. Furthermore, the concept of theoretical sensitivity is not converted into clear cut methodological rules: it remains unclear how a theoretically sensitive researcher can use previous theoretical knowledge to avoid drowning in the data. If one takes into account the frequent warnings not to force theoretical concepts on the data one gets the impression that a grounded theorist is advised to introduce suitable theoretical concepts ad hoc drawing on implicit theoretical knowledge but should abstain from approaching the empirical data with ex ante formulated hypotheses.
Kelle (2005) recognizes that Glaser identified a series of “theoretical families” to help assist with the practical experience of coding. I find it somewhat interesting that many of the terms in these first families are very reminiscent of so-called “natural language”, as used in the wave of cybernetics that was contemporary with Glaser (1978) and which largely dealt with “expert systems”.
In the book “Theoretical Sensitivity” (1978) GLASER presents an extended list of terms which can be used for the purpose of theoretical coding loosely structured in the form of so called theoretical “coding families”. Thereby various theoretical concepts stemming from different (sociological, philosophical or everyday) contexts are lumped together, as for example:
- terms, which relate to the degree of an attribute or property (“degree family”), like “limit”, “range”, “extent”, “amount” etc.,
- terms, which refer to the relation between a whole and its elements (“dimension family”), like “element”, “part”, “facet”, “slice”, “sector”, “aspect”, “segment” etc.,
- terms, which refer to cultural phenomena (“cultural family”) like “social norms”, “social values”, “social beliefs” etc.
This is substantiated by other observations by Kelle (2005) that ad hoc coding actually follows implicit theoretical knowledge:
One of the most crucial differences between GLASER’s and STRAUSS’ approaches of Grounded Theory lies in the fact that STRAUSS and CORBIN propose the utilization of a specified theoretical framework based on a certain understanding of human action, whereas GLASER emphasises that coding as a process of combining “the analyst’s scholarly knowledge and his research knowledge of the substantive field” (1978, p.70) has to be realised ad hoc, which means that it has often to be conducted on the basis of a more or less implicit theoretical background knowledge.
and that the Glaserian approach is better suited for more experienced, rather than novice sociologists, who will have internalized the theory that they then apply in their coding.
Kelle then goes on to address how grounded theory can or cannot be applied in alignment with inductivist or hypothetico-deductivist reasoning, and raises abductive reasoning as an alternative means of arriving at legitimate and verifiable conclusions. There is too much detail in the paper to copy here.
But here is another nice conclusive gem from the end:
Whereas STRAUSS and CORBIN pay a lot of attention to the question how grounded categories and propositions can be further validated, GLASER’s concept shows at least a gleam of epistemological fundamentalism (or “certism”, LAKATOS 1978) especially in his defence of the inductivism of early Grounded Theory. “Grounded theory looks for what is, not what might be, and therefore needs no test” (GLASER 1992, p.67). Such sentences carry the outmoded idea that empirical research can lead to final certainties and truths and that by using an inductive method the researcher may gain the ability to conceive “facts as they are” making any attempt of further corroboration futile.
Rebuttals by Glaser
Glaser (2002) constitutes a rebuttal to Charmaz (2000). As Bryant (2003) points out in his response to Glaser (2002), it is very angry, polemical and irrational. I don’t want to go too in depth into the fundamental problems with Glaser’s response (see Bryant’s paper for the details), but the gist is that Glaser never really got the message about data being inherently constructed by researchers’ decisions, actions and circumstances. Glaser seems to continue believing in the inherent neutrality of data as a matter of faith.
This being said, Glaser (2002) did highlight the large emphasis on descriptive rather than explanatory potential in Charmaz’s approach. This aligns with my own apprehensions when I try to address the relevance of my work. I tend to use the term “articulate” as a way to frame my work as descriptive, but in a way that lends value. Still, the very fuzzy distinction between the power of identifying the shapes of and relationships among things, and explaining their causes and effects in a generalizable way (i.e., theories, or explanations), somehow troubles me. I wonder if Glaser is drawing a false distinction here, and through that, a false prioritization of explanation over description as a desired outcome. This would put my mind at ease, as would dismissing Glaser’s dismissal of people who simply don’t know how to do the “real” grounded theory (who, in his mind, include all feminist and critical researchers).
On the functional and pragmatic roots of grounded theory
I completely agree with this statement from Clarke (2003: 555):
To address the needs and desires for empirical understandings of the complex and heterogeneous worlds emerging through new world orderings, new methods are requisite (Haraway 1999). I believe some such methods should be epistemologically/ ontologically based in the pragmatist soil that has historically nurtured symbolic interactionism and grounded theory. Through Mead, an interactionist grounded theory has always had the capacity to be distinctly perspectival in ways fully com patible with what are now understood as situated knowledges. This fundamental and always already postmodern edge of a grounded theory founded in symbolic interactionism makes it worth renovating.
This is super interesting, and really contextualizes how Strauss imagined grounded theory to be useful for him:
Some years ago, Katovich and Reese (1993:400–405) interestingly argued that Strauss’s negotiated order and related work recuperatively pulled the social around the postmodern turn through its methodological [grounded theoretical] recognition of the partial, tenuous, shifting, and unstable nature of the empirical world and its constructedness. I strongly agree and would argue that Strauss also furthered this “postmodernization of the social” through his conceptualizations of social worlds and arenas as modes of understanding the deeply situated yet always also fluid orga nizational elements of negotiations. He foreshadowed what later came to be known as postmodern assumptions: the instability of situations; characteristic changing, porous boundaries of both social worlds and arenas; social worlds seen as mutually constitutive/coproduced through negotiations taking place in arenas; negotiations as central social processes hailing that “things can always be otherwise”; and so on. Significantly, negotiations constitute discourses that also signal micropolitics of power as well as “the usual” meso/macrostructural elements—power in its more fluid forms (e.g., Foucault 1979, 1980). Through integrating the social worlds/arenas/ negotiations framework with grounded theory as a new conceptual infrastructure, I hope to sustain and extend the methodological contribution of grounded theory to understanding and elaborating what has been meant by “the social” in social life — before, during, and after the postmodern turn.
It also echoes Charmaz’s vision of grounded theory as a powerful tool, and Bryant’s (2003) call to “look at what Glaser and Strauss actually did, rather than what they claimed — and continued to claim — they were doing” to uncover “the basis for a powerful research approach”. Bryant (2003) further cites Baszanger and Dodier (1997), who characterize grounded theory as a method “consisting of accumulating a series of individual cases, of analyzing them as a combination between different logics of action that coexist not only in the field under consideration, but even within these individuals or during their encounters”. Bryant (2003) summarizes this by stating that “[t]he aim of such methods is generalization rather than totalization, with the objective of producing ‘a combinative inventory of possible situations’”.
Theoretical sampling
From Charmaz (2000: 519):
We use theoretical sampling to develop our emerging categories and to make them more definitive and useful. Thus the aim of this sampling is to refine ideas, not to increase the size of the original sample. Theoretical sampling helps us to identify conceptual boundaries and pinpoint the fit and relevance of our categories.
From Charmaz (2000: 519) on the role of theoretical sampling in re-iterative data collection:
The necessity of engaging in theoretical sampling means that we researchers cannot produce a solid grounded theory through one-shot interviewing in a single data collection phase. Instead, theoretical sampling demands that we have completed the work of comparing data with data and have developed a provisional set of relevant categories for explaining our data. In turn, our categories take us back to the field to gain more insight about when, how, and to what extent they are pertinent and useful.
From Charmaz (2000: 520) on the notion of “saturation”:
Grounded theory researchers take the usual criteria of “saturation” (i.e., new data fit into the categories already devised) of their categories for ending the research (Morse 1995). But what does saturation mean? In practice, saturation seems elastic (see also Flick 1998; Morse 1995). Grounded theory approaches are seductive because they allow us to gain a handle on our material quickly. Is the handle we gain the best or most complete one? Does it encourage us to look deeply enough? The data in works claiming to be grounded theory pieces range from a handful of cases to sustained field research. The latter more likely fulfills the criterion of saturation and, moreover, has the resonance of intimate familiarity with the studied world. As we define our categories as saturated (and some of us never do), we rewrite our memos in expanded, more analytic form. We put these memos to work for lectures, presentations, papers, and chapters. The analytic work continues as we sort and order memos, for we may discover gaps or new relationships.
From Clarke (2003: 557):
Unique to this approach has been, first, its requiring that analysis begin as soon as there are data. Coding begins immediately, and theorizing based on that coding does as well, however provisionally (Glaser 1978). Second, “sampling” is driven not necessarily (or not only) by attempts to be “representative” of some social body or population (or its heterogeneities) but especially and explicitly by theoretical concerns that have emerged in the provisional analysis. Such “theoretical sampling” focuses on finding new data sources (persons or things) that can best explicitly address specific theoretically interesting facets of the emergent analysis. Theoretical sampling has been integral to grounded theory from the outset, remains a fundamental strength of this analytic approach, and is crucial for the new situational analyses.
Butler, Copnell, and Hall (2018) provide concrete examples of theoretical sampling in practice. They point out that many studies that claim to follow grounded theory do not adequately document their implementation of theoretical sampling, providing no evidence of how it was used or failing to link theoretical sampling to stages of theory development.
Butler, Copnell, and Hall (2018) on the difference between “purposeful” sampling, which occurs at the start of a project, and theoretical sampling, which occurs after you get the ball rolling:
In constructivist grounded theory studies, data collection begins with purposeful sampling. Initial participants or sources of data are chosen based on their experiences of the area under study or ability to inform the early research questions (Charmaz 2014; Currie 2009). However, according to Charmaz (2014), this early sampling strategy offers only a starting point: somewhere to launch the data collection process rather than a definitive strategy to develop the overall theory. The criteria used in early purposeful sampling are not the same as those used during the theoretical sampling process. Instead, the criteria which guide theoretical sampling decisions change throughout a study, as ideas and insights into the data develop and change.
Butler, Copnell, and Hall (2018) on identifying when to switch from purposeful to theoretical sampling:
Though some grounded theorists believe that theoretical sampling can start after a single interview, suggesting that all that is required are beginning concepts that warrant further exploration (Corbin and Strauss 2014), Charmaz (2014) asserts that theoretical sampling cannot begin until tentative categories have developed, which is unlikely to occur after a single interview. This is because, from a constructivist standpoint, the purpose of theoretical sampling is to narrow the researcher’s focus towards the developing categories in order to refine them, explore their boundaries, identify their properties, and discover relationships between them (Charmaz 2014).
Butler, Copnell, and Hall (2018) on identifying when saturation occurs (the point at which no new category properties are gleaned when new data are added, and the categories are robust enough to encompass the variations present in the study):
Theoretical sampling continues until data saturation occurs. It is often difficult to determine exactly when this occurs, but for most grounded theorists, saturation marks the point at which no new properties of the categories are gleaned when new data is added, and the categories are robust enough to encompass the variations present in the study (Charmaz 2014; Maz 2013).
The examples provided by Butler, Copnell, and Hall (2018) focus on how theoretical sampling helped introduce new research sites, adapt the interview questions, and identify new participant characteristics to seek.
Morse and Clark (2019: 146) on the purpose of sampling in qualitative research: “In qualitative inquiry, sampling enables access to new dimensions of the topic that arise during reflexive inquiry.”
They go on:
In qualitative inquiry, first sampling is based on the researcher’s need to understand the phenomenon. The researcher’s understanding builds incrementally as the study progresses: who is invited to participate in the study (i.e., the sample) is determined by what they know about the topic: that is, what they may contribute –– their experience, role, and so forth. As this requisite knowledge changes throughout the study, so does the type of participant who is invited to participate change.
The rest of Morse and Clark (2019) identifies various factors with regards to sample coverage that are important to consider as the project evolves, and suggests ways to refine the sample as the work progresses. I think I may return to this if/when I hit any roadblocks, but at this point I’m not getting much out of this text.
Data Collection
Interviews
From Charmaz (2000: 525):
A constructivist approach necessitates a relationship with respondents in which they can cast their stories in their terms. It means listening to their stories with openness to feeling and experience. … Furthermore, one-shot interviewing lends itself to a partial, sanitized view of experience, cleaned up for public discourse. The very structure of an interview may preclude private thoughts and feelings from emerging. Such a structure reinforces whatever proclivities a respondent has to tell only the public version of the story. Researchers’ sustained involvement with research participants lessens these problems.
Fontana and Frey (2000) spend some time writing about the emergence of an “interview society”, whereby interviews are commonly used to seek various forms of biographical information. They cite Gubrium and Holstein (1998), who noted that “the interview has become a means of contemporary storytelling, where persons divulge life accounts in response to interview inquiries”. They then go over a brief history of interviewing in the context of sociological research, which largely tracks the values underlying positivist and postmodernist transitions as you might expect.
Yin (2014: 110-113) differentiates between three kinds of interviews:
Prolonged interviews: Usually over two hours long, or over an extended period of time covering multiple sittings.
You can ask interviewees about their interpretations and opinions about people and events or their insights, explanations, and meanings related to certain occurrences. You can then use such propositions as the basis for further inquiry, and the interviewee can suggest other persons for you to interview, as well as other sources of evidence. The more that an interviewee assists in this manner, the more that the role may be considered one of an “informant” rather than a participant. Key informants are often critical to the success of a case study. Such persons can provide you with insights into a matter and also give you access to other interviewees who may have corroboratory or contrary evidence.
Shorter interviews: More focused, around one hour, open-ended and conversational but largely following the protocol.
A major purpose of such an interview might simply be to corroborate certain findings that you already think have been established, but not to ask about other topics of a broader, open-ended nature. In this situation, the specific questions must be carefully worded, so that you appear genuinely naive about the topic and allow the interviewee to provide a fresh commentary about it; in contrast, if you ask leading questions, the corroboratory purpose of the interview will not have been served. … As an entirely different kind of example, your case study protocol might have called for you to pay specific attention to an interviewee’s personal rendition of an event. In this case, the interviewee’s perceptions and own sense of meaning are the material to be understood. … In both [situations], you need to minimize a methodological threat created by the conversational nature of the interview.
Survey interviews: A more structured questionnaire. Usually works best as one component of multiple sources of evidence.
Structured interviewing
From Fontana and Frey (2000: 649-651):
Interviewers ask respondents a series of preestablished questions with a limited set of response categories. The interviewer records responses according to a preestablished coding scheme.
Instructions to interviewers often follow these guidelines:
- Never get involved in long explanations of the study; use the standard explanation provided by the supervisor.
- Never deviate from the study introduction, sequence of questions, or question wording.
- Never let another person interrupt the interview; do not let another person answer for the respondent or offer his or her opinions on the question.
- Never suggest an answer or agree or disagree with an answer. Do not give the respondent any idea of your personal views on the topic of the question or the survey.
- Never interpret the meaning of a question; just repeat the question and give instructions or clarifications that are provided in training or by the supervisors.
- Never improvise, such as by adding answer categories or making wording changes.
The interviewer must establish a “balanced rapport”, being casual and friendly while also directive and impersonal. Interviewers must also perfect a style of “interested listening” that rewards respondents’ participation but does not evaluate their responses.
From Fontana and Frey (2000: 651):
This kind of interview often elicits rational responses, but it overlooks or inadequately assesses the emotional dimension.
Morse and Clark (2019: 150) describe various interview strategies in relation to their reflexive potential vis-a-vis theoretical sampling. With regards to semi-structured interviews, which is my favoured approach for this project, they note that this technique is often used in grounded theory, but “because data are analyzed all at once at the end of data collection, much of the reflexivity required for the sampling strategies necessary for excellent grounded theory … is lost”.
Group interviews
From Fontana and Frey (2000: 651-652):
Can be used to test a methodological technique, try out a definition of a research problem or to identify key informants. Pre-testing a questionnaire or survey design.
Can be used to aid respondents’ recall of specific events or to stimulate embellished descriptions of events, or experiences shared by members of a group.
In formal group interviews, participants share views through the coordinator.
Less formal group interviews are meant to establish the widest range of meaning and interpretation on a topic, and the objective is “to tap intersubjective meaning with depth and diversity”.
Unstructured interviewing
From Fontana and Frey (2000: 652-657):
The essence of an unstructured interview is establishing a human-to-human relation with the respondent and a desire to understand rather than to explain.
Fontana and Frey (2000) then go on with some practical guidance on how to engage in unstructured interviews, largely concerned with how to access a community and relate with respondents.
Postmodern takes on interviewing
Fontana and Frey (2000) address some “new” takes on interviewing emerging from the postmodern turn. I kinda think there is some potential behind approaches that emphasize interviews as a negotiated accomplishment, or a product of communal sensemaking between interviewer and respondent. I think it could be really helpful in the context of my research, which is very concerned with drawing out tensions that respondents have in mind but are not really able to articulate in a systematic way.
However, Fontana and Frey (2000) also draw attention to criticism of highly engaged interviewing approaches, which seem to equate closeness with the respondent with access to their “true self”, a self which may not actually be fixed (especially in the artificial environment of the interview setting). Critiques of such “closeness” use the terms “romantic” or “crusading” as epithets. Moreover, there is additional reference to the culturally-embedded assumption that interviews are necessarily valuable sources of information, as if speaking one’s mind can adequately convey one’s thoughts and experiences — this is criticized as a particularly western approach to information extraction surrounding internalized and externalized thoughts and behaviour, as instilled through participation in the “interview society” addressed earlier in the text.
Transcribing
This section describes how I transcribe interviews and accounts for my decisions to encode certain things and not others. It goes on to explain the procedures for transcribing spoken dialog into textual formats, including the notation applied to encode idiosyncratic elements of conversational speech.
Transcript notation
Derived from the transcription protocol applied for the E-CURATORS project.
Cleaning audio
To clean the audio:

- Select a clip that is representative of a single source of background noise, and then filter that wavelength throughout the entire audio file.
- After selecting the clip, go to `Effect >> Noise Reduction`, select `Get Noise Profile`, then press `OK`.
- Close the noise reduction menu, then select the entire range of audio using the keyboard shortcut `Command + A`.
- Go back to the noise reduction window (`Effect >> Noise Reduction`) to apply the filter based on the noise profile identified for the noisy clip.
- Export the modified audio file to the working directory (`File >> Export >> Export as .WAV`).
- Use `ffmpeg` to replace the dirty audio track with the clean one:

```
ffmpeg -i dirty.mp4 -i clean.wav -c:v copy -map 0:v:0 -map 1:a:0 clean.mp4
```
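When several interviews need the same remux treatment, the final step can be scripted. This is a minimal sketch under my own assumptions: cleaned tracks are saved alongside each video as `<name>.clean.wav`, and the helper names (`remux_command`, `remux_all`) are invented for illustration, not part of any established tool.

```python
import subprocess
from pathlib import Path

def remux_command(dirty_video: Path, clean_audio: Path, output: Path) -> list[str]:
    """Build the ffmpeg argument list that keeps the original video stream
    and replaces its audio with the cleaned track."""
    return [
        "ffmpeg",
        "-i", str(dirty_video),   # first input: original recording
        "-i", str(clean_audio),   # second input: noise-reduced audio
        "-c:v", "copy",           # copy video without re-encoding
        "-map", "0:v:0",          # take video from the first input
        "-map", "1:a:0",          # take audio from the second input
        str(output),
    ]

def remux_all(directory: Path) -> None:
    """Remux every <name>.mp4 that has a matching <name>.clean.wav."""
    for video in directory.glob("*.mp4"):
        audio = video.with_suffix(".clean.wav")
        if audio.exists():
            cmd = remux_command(video, audio, video.with_suffix(".clean.mp4"))
            subprocess.run(cmd, check=True)
```

Separating command construction from execution makes the mapping easy to inspect before committing to a long batch run.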
Field notes
See Yin (2014: 124-125).
Focus groups
From Kitzinger (1994); Wilkinson (1998); Parker and Tritter (2006); Morgan and Hoffman (2018).
Crucially, focus groups involve the interaction of group participants with each other as well as with the moderator, and it is the collection of this kind of interactive data which distinguishes the focus group from the one-to-one interview (c.f. Morgan 1988), as well as from procedures which use multiple participants but do not permit interactive discussion (c.f. Stewart and Shamdasani 1990). The ‘hallmark’ of focus groups, then, is the ‘explicit use of group interaction to produce data and insights that would be less accessible without the interaction found in a group’ (Morgan 1997: 2).
Wilkinson (1998) identifies three key features of focus group methods:
- providing access to participants’ own language, concepts and concerns;
- “The relatively free flow of discussion and debate between members of a focus group offers an excellent opportunity for hearing ‘the language and vernacular used by respondents’”
- Useful for gaining insight into participants’ conceptual worlds on their own terms
- “Sometimes, the participants even offer the researcher a ‘translation’ of unfamiliar terms or concepts”
- “Focus group interactions reveal not only shared ways of talking, but also shared experiences, and shared ways of making sense of these experiences. The researcher is offered an insight into the commonly held assumptions, concepts and meanings that constitute and inform participants’ talk about their experiences.”
- encouraging the production of more fully articulated accounts;
- “In focus groups people typically disclose personal details, reveal discrediting information, express strong views and opinions. They elaborate their views in response to encouragement, or defend them in the face of challenge from other group members: focus groups which ‘take off’ may even, like those run by Robin Jarrett (1993: 194) have ‘the feel of rap sessions with friends’.”
- “Even when focus group participants are not acquainted in advance, the interactive nature of the group means that participants ask questions of, disagree with, and challenge each other, thus serving ‘to elicit the elaboration of responses’ (Merton 1987: 555).”
- “Other ethical issues in focus group research stem from group dynamics, insofar as participants can collaborate or collude effectively to intimidate and/or silence a particular member, or to create a silence around a particular topic or issue, for example. In such cases, it falls to the group moderator to decide whether/how to intervene, and it can be difficult to balance such conflicting goals as ensuring the articulation of accounts, supporting individuals, and challenging offensive statements.”
- and offering an opportunity to observe the process of collective sense-making.
- Focus groups also offer an opportunity for researchers to see exactly how views are constructed, expressed, defended and (sometimes) modified during the course of conversations with others, i.e. to observe the process of collective sense-making in action. This is likely to be of particular interest to researchers working within a social constructionist framework, who do not view beliefs, ideas, opinions and understandings as generated by individuals in splendid isolation, but rather as built in interaction with others, in specific social contexts: as Radley and Billig (1996: 223) say, ‘thinking is a socially shared activity’. In a focus group, people are confronted with the need to make collective sense of their individual experiences and beliefs (Morgan and Spanish 1984: 259).
Wilkinson (1998) addresses opportunities and challenges in analyzing focus group data using content analysis and ethnographic techniques.
With regards to content analysis:
the main advantages of content analysis are to allow for a relatively systematic treatment of the data and to enable its presentation in summary form. … the researcher has first to decide on the unit of analysis: this could be the whole group, the group dynamics, the individual participants, or the participants’ utterances
Morgan (1997) proposes three distinct ways of coding focus group data: noting whether each group discussion contains a given code; noting whether each participant mentions a given code; and noting all mentions of a given code (i.e. across groups or participants). Once the data have been coded in one (or more) of these ways, the question of whether to quantify them is a further issue. Morgan (1993) argues the value of simple ‘descriptive counts’ of codes (stopping short of using inferential statistical tests, whose assumptions are unlikely to be met in focus groups)
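Morgan’s three counting modes are easy to conflate, so a small sketch may help keep them distinct. The data structure, group names, participants and codes below are invented purely for illustration; Morgan does not prescribe any particular representation:

```python
from collections import Counter

# Hypothetical coded data: for each group, the codes applied to each
# participant's utterances over the whole discussion.
coded = {
    "group1": {"Ana": ["trust", "cost"], "Ben": ["cost", "cost"]},
    "group2": {"Cam": ["trust"], "Dee": ["access"]},
}

def group_count(code: str) -> int:
    """Mode 1: how many group discussions contain the code at least once."""
    return sum(
        any(code in utterances for utterances in group.values())
        for group in coded.values()
    )

def participant_count(code: str) -> int:
    """Mode 2: how many participants mention the code at least once."""
    return sum(
        code in utterances
        for group in coded.values()
        for utterances in group.values()
    )

def mention_count(code: str) -> int:
    """Mode 3: total mentions of the code across groups and participants."""
    return sum(
        Counter(utterances)[code]
        for group in coded.values()
        for utterances in group.values()
    )
```

With this toy data, “cost” appears in one group, is mentioned by two participants, and occurs three times in total, which is exactly the kind of simple “descriptive count” Morgan (1993) argues for, stopping short of inferential statistics.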
And with regards to ethnographic analysis:
Its main advantage is to permit a detailed interpretative account of the everyday social processes of communication, talk and action occurring within the focus group. The key issue in ethnographic analysis is how to select the material to present (whether this is framed up as ‘themes’, ‘discourses’, or simply as illustrative quotations), without violating the ‘spirit’ of the group, and without losing sight of the specific context within which the material was generated. … A particular challenge is how to preserve the interactive nature of focus group data: a surprising limitation of published focus group research is the rarity with which group interactions are analysed and reported (c.f. Carey and Smith 1994; Kitzinger 1994a). Extracts from focus group data are most commonly presented as if they were one-to-one interview data, often with no indication that more than one person is present; still more rarely does interaction per se constitute the analytic focus.
Kitzinger (1994) elaborates on the interactive nature of focus groups, which she identifies as the methods’ core feature.
Focus groups are group discussions organised to explore a specific set of issues such as people’s views and experiences … The group is ‘focused’ in the sense that it involves some kind of collective activity - such as viewing a film, examining a single health education message or simply debating a particular set of questions.
Even when group work is explicitly included as part of the research it is often simply employed as a convenient way to illustrate a theory generated by other methods or as a cost-effective technique for interviewing several people at once. Reading some such reports it is hard to believe that there was ever more than one person in the room at the same time. This criticism even applies to many studies which explicitly identify their methodology as ‘focus group discussion’ — in spite of the fact that the distinguishing feature of focus groups is supposed to be the use of interaction as part of the research data.
It would be naive, however, to assume that group data is by definition ‘natural’ in the sense that it would have occurred without the group having been convened for this purpose. It is important to note that although, at times, the focus groups may approximate to participant observation the focus groups are artificially set up situations. Rather than assuming the group session unproblematically and inevitably reflects ‘everyday interactions’ (although sometimes it will) the group should be used to encourage people to engage with one another, verbally formulate their ideas and draw out the cognitive structures which previously have been unarticulated.
On the active role of the facilitator:
Sessions were conducted in a relaxed fashion with minimal intervention from the facilitator - at least at first. This allowed the facilitator to ‘find her feet’ and permitted the research participants to set the priorities. However, the researcher was never passive. Trying to maximise interaction between participants could lead to a more interventionist style: urging debate to continue beyond the stage it might otherwise have ended, challenging people’s taken for granted reality and encouraging them to discuss the inconsistencies both between participants and within their own thinking.
On the role of the presentation or activity:
Such exercises not only provided invaluable data from each group but allow for some cross-comparisons between groups. Each discussion session has its own dynamic and direction — when it comes to analysis it is extremely useful to have a common external reference point such as that provided by the card game or the use of vignettes (Khan and Manderson 1992).
On complementary interactions:
The exchange between the research participants not only allows the researcher to understand which advertisement they are talking about but to gather data on their shared perception of that image.
Brainstorming and loose word association was a frequent feature of the research sessions.
people’s knowledge and attitudes are not entirely encapsulated in reasoned responses to direct questions. Everyday forms of communication such as anecdotes, jokes or loose word association may tell us as much, if not more, about what people ‘know’. In this sense focus groups ‘reach the parts that other methods cannot reach’ - revealing dimensions of understanding that often remain untapped by the more conventional one-to-one interview or questionnaire.
In addition to the advantages discussed above, for example, [participants] discussing whether they had the ‘right’ to know if another child in the play group had had the virus asserted that ‘you think of your own first’. It was this phrase, and these sort of sentiments, which seemed to capture their consensus and resulted in nods of agreement round the group and assertions that ‘that’s right’ and ‘of course’. Indeed, it was often the strength of the collective reaction that highlighted the specific context within which the research participants experienced AIDS information.
On argumentative interactions:
the group process however, is not only about consensus and the articulation of group norms and experiences. Differences between individuals within the group are equally important and, in any case, rarely disappear from view. Regardless of how they are selected, the research participants in any one group are never entirely homogenous. Participants do not just agree with each other — they also misunderstand one another, question one another, try to persuade each other of the justice of their own point of view and sometimes they vehemently disagree.
Such unexpected dissent led them to clarify why they thought as they did, often identifying aspects of their personal experience which had altered their opinions or specific occasions which had made them re-think their point of view. Had the data been collected by interviews the researcher might have been faced with ‘arm chair’ theorizing about the causes of such difference but in a focus group these can be explored ‘in situ’ with the help of the research participants.
Close attention to the ways in which research participants tell stories to one another also prevents the researcher from assuming that she knows ‘the meaning’ of any particular anecdote or account.
Parker and Tritter (2006) discuss “key issues relating to the complexity and necessity of considering sampling issues within the context of focus group research and the implications this has for the collection and analysis of resultant data.”
Identifies a common logistical impetus for focus groups relating to the acquisition of funding:
Increasing pressure from research funding organizations to adopt multiple-method research strategies and the fact that focus groups generate far more data than a range of other methods in relation to face-to-face contact between researchers and participants, has added to [the method’s popularity].
Calls out the conflation between focus groups and group interviews:
A similarly pervasive trend in and around academic discussion of qualitative research methods is that focus groups are sometimes seen as synonymous with group interviews and it is this issue which constitutes our second point of contention. … In keeping with the views of a number of other writers in this field, we are of the opinion that there is a fundamental difference between these two research techniques and that the critical point of distinction surrounds the role of the researcher and her/his relationship to the researched (Smithson, 2000). In group interviews the researcher adopts an ‘investigative’ role: asking questions, controlling the dynamics of group discussion, often engaging in dialogue with specific participants. This is premised on the mechanics of one-to-one, qualitative, in-depth interviews being replicated on a broader (collective) scale. A relatively straightforward scenario ensues: the researcher asks questions, the respondents relay their ‘answers’ back to the researcher. In focus groups the dynamics are different. Here, the researcher plays the role of ‘facilitator’ or ‘moderator’; that is, facilitator/moderator of group discussion between participants, not between her/himself and the participants. Hence, where focus groups are concerned, the researcher takes a peripheral, rather than a centre-stage role for the simple reason that it is the inter-relational dynamics of the participants that are important, not the relationship between researcher and researched (see Kitzinger, 1994a; Johnson, 1996).
Part of the problem of achieving this kind of interactional synergy in data collection is that, despite their collective interests, participants may not always be keen to engage with each other, or alternatively, may know each other so well that interaction is based on patterns of social relations that have little to do with the research intent of the focus group. The need to consider the impact on interaction of the constitution of the focus group requires that close attention be paid to methods (and outcomes) of recruitment. … issues of sampling and selection are likely to prove crucial in relation to the form and quality of interaction in a focus group and therefore the kinds of data one gathers and the extent to which participants share their opinions, attitudes and life experiences.
A large part of what follows is an extremely honest account of the challenges involved in recruiting participants for focus groups. The pressure on students to recruit their friends shaped which viewpoints were and were not represented. Moreover, the authors draw attention to the role of timing and convenience in recruitment strategy, and to the representativeness of expressed viewpoints.
I found it hard to take notes on this since it comes off as a continuous stream of consciousness. However, it should still be read from top to bottom to get a sense of practical challenges that are not commonly addressed, except among veterans and in private circles.
From Morgan and Hoffman (2018: 251), who define the key strengths and benefits of focus groups:
The strength of focus groups in this regard is the variety of different perspectives and experiences that participants reveal during their interactive discussion. This is especially important in the twin processes of sharing and comparing, which create dynamics that are not available in individual interviews. This means that focus groups are especially useful for investigating the extent of both consensus and diversity among the participants, as they engage in sharing and comparing among themselves with the moderator in a facilitating role. By comparison, individual interviews provide a degree of depth and detail on each participant that is not available in focus groups.
From Morgan and Hoffman (2018: 251-252) on complementary aspects of focus groups and individual interviews:
Often, they are best seen as complementary rather than competing methods. For example, individual key informant interviews can be a good starting point for planning a future set of focus groups. Alternatively, individual interviews can be a useful follow-up to focus groups, giving more opportunities to hear from participants whose thoughts and experiences are worth pursuing further. Finally, either individual interviews or focus groups can be used as ‘member checks’, where one method is used to get feedback after the researcher has done preliminary analyses on the data from another method.
From Morgan and Hoffman (2018: 252) on combining focus groups with quantitative research methods:
One common role for focus groups in mixed methods is to provide preliminary inputs to the development of either survey instruments or program interventions. In this case, the success of the quantitative portion of the project depends on having materials that work well for the participants, and focus groups can provide participants’ voices during this development phase. Focus groups can also be equally useful for following up on surveys and experimental studies. In this case, the typical goal is to extend what was learned with the quantitative data by gaining a better sense of how and why those quantitative results came about.
From Morgan and Hoffman (2018: 252) on comparing data deriving from focus groups and individual interviews:
One trap to avoid in this regard is the assumption that individual interviews represent a kind of ‘gold standard’ where the focus group introduces an element of bias due to the influences of the group on the individual (see Morrison 1998 for an example of this argument). This assumes that each person has one ‘true’ set of attitudes that will be revealed only in the presence of a researcher in a one-to-one interview, rather than in a group setting with peers. Instead of arguing about whether one of these data collection formats is better than the other, it is more useful to treat them as different contexts –– which may well produce different kinds of data.
From Morgan and Hoffman (2018: 255) on group size:
Size is a crucial consideration in decisions about group composition. Typically, focus groups range in size from 5 to 10 people. Smaller sizes are particularly appropriate for sensitive topics and/or situations where the participants have a high level of engagement with the topic.
From Morgan and Hoffman (2018: 255) on dyadic interviews:
Dyadic interviews are similar to focus groups in that they seek to accomplish the ‘sharing and comparing’ dimension in interaction, but they limit the dynamic to a conversation between two people, rather than the complexity that can arise when multiple participants engage in a lively discussion. Similar to standard focus groups, the moderator is there primarily to help the respondents establish rapport and produce rich data. Dyadic interviews are especially well suited to interviewing spouses, and are thus frequently used in the family studies literature.
From Morgan and Hoffman (2018: 255-256) on open-ended or restricted styles:
One alternative is to use less-structured interviews which create a ‘bubbling up’ of a wide range of potentially unanticipated responses. This approach necessarily uses fewer questions, with each lasting around 15 to 20 minutes. These interviews work best when the goal is to hear the participants’ wide-ranging thoughts about the topic, with less emphasis on the specific types of questions that the researcher feels are important about a particular topic. A disadvantage is that the participants may take the interview in a direction that is not necessarily productive for the overall project. Alternatively, more structured interviews –– with more targeted questions –– reduce this problem, but at the cost of restricting the participants’ frame of reference.
From Morgan and Hoffman (2018: 256) on the “funnel” approach:
a third option is a ‘funnel’ approach to interviewing, that includes both open-ended and highly targeted questions. Funnels work systematically from less-structured, open-ended questions to more structured, targeted questions. After introductions, the moderator begins the focus group with a broad question that is intended to engage the participants, by asking about the topic from their point of view. Subsequent questions successively narrow in on the research questions the researcher has in mind. For example, a funnel-oriented series of questions may begin with a question like ‘What do you think are the most pressing issues around gun safety in the United States?’ This could be followed by, ‘Of those you have mentioned, which do you think should receive the highest priority from our policymakers?’ and then more targeted questions such as ‘If you were going to contact your legislators about your concerns, what kinds of things would you say?’
From Morgan and Hoffman (2018: 256) on the “inverted funnel” approach:
Another alternative is the ‘inverted funnel approach’, where the questions begin with narrower topics and then broaden to the more open-ended. This approach can be helpful in cases where the participants themselves may not have an immediately available set of thoughts about the topic. This approach often begins by asking about examples of concrete experiences, and then moves to more abstract issues. For example, if you were interested in how the culture of a particular neighborhood was affected by gentrification over a specific period of time, it might be helpful to begin with a very targeted question of the appropriate …
From Morgan and Hoffman (2018: 256) on establishing rapport among participants:
… there are two aspects of rapport in focus groups: between the moderator and the participants, and among the participants themselves. To foster both kinds of rapport, the initial introduction and first question(s) can help set a tone that is conducive to the goal of getting the participants to share and compare their thoughts and experiences. In focus groups, this often means a trade-off between asking questions that will get the participants talking with each other, versus concentrating on the things that are most directly important to the research. The key point is that the interaction among the participants is the source of the data, so the interview questions need to begin by generating discussions, which may mean delaying the most relevant questions until a good rapport has been established.
From Morgan and Hoffman (2018: 257) on the division of labour in focus groups, from a logistical perspective:
A common approach would be for the main moderator to manage the questions/discussion while an assistant observes, takes notes, and is available to help the moderator with any unanticipated needs. An especially important role for the assistant moderator is to ensure the recording equipment is functioning properly throughout the entire interview. This approach can also be advantageous to the research process if the two moderators debrief together afterward to co-create field notes as both moderators may have important –– but different –– observations.
From Morgan and Hoffman (2018: 257) on dual moderators:
Additionally, a ‘dual moderator’ approach can be taken wherein both interviewers facilitate the group discussion. In the latter approach, it is important that the moderators each have a clear understanding of their respective roles so that the overall experience is enhanced. One common division of labor for this strategy involves one moderator who is more familiar with technical aspects of the topics and another who is more familiar with the group dynamics of facilitation. In addition, working as either an assistant moderator or half of a dual moderating team can be a useful technique for training new moderators.
From Morgan and Hoffman (2018: 258) on summary-based reporting:
The goal in Summary-Based Reporting is to determine which topics were most important to the participants through a descriptive account of the primary topics in the interviews. A simple standard for judging importance is whether a topic arose in nearly every focus group, as well as the extent to which it engaged the participants when it did arise. What matters is not just the frequency with which a topic is mentioned but also the level of interest and significance the participants attached to the topic. This requires a degree of judgment on the part of the analyst, but participants are usually clear about indicating which topics they find particularly important.
To examine these summaries, it is often helpful to create a group-by-question grid where each group is a row and each question is a column. The cells in this grid contain a summary of what a specific group said in response to a particular question. The most effective strategy for using this grid is to make comparisons across what each group said in response to each question. In essence, this moves from column to column, comparing what the different groups said in response to question number one, then question number two, and so on. The goal is to create an overall summary of what the full set of groups said about each question.
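The grid described above is simple enough to sketch as a data structure. In this toy illustration (the group labels, question labels, and cell summaries are hypothetical placeholders of my own), moving "column to column" means collecting every group's summary for one question at a time:

```python
# Sketch of the group-by-question grid: rows are groups, columns are
# questions, and cells summarize what each group said for each question.
grid = {
    "group_1": {"q1": "strong consensus on X", "q2": "divided on Y"},
    "group_2": {"q1": "little interest in X", "q2": "divided on Y"},
}

def summaries_by_question(grid):
    """Move column by column: gather every group's summary for each
    question, to support cross-group comparison one question at a time."""
    questions = sorted({q for row in grid.values() for q in row})
    return {q: {group: row[q] for group, row in grid.items() if q in row}
            for q in questions}
```

Reading the result question-by-question yields the raw material for the overall summary of what the full set of groups said about each question.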
From Morgan and Hoffman (2018: 259) on content analysis:
Various forms of qualitative Content Analysis can be applied to focus groups regardless of whether the analysis is driven by counting or by more qualitative approaches. The analytic system can be derived deductively, inductively, or alternatively through a combination of the two;
From Morgan and Hoffman (2018: 259) on thematic analysis:
The most widely cited version of Thematic Analysis was developed by Braun and Clarke (2006, 2012), who proposed a six-step process: (1) immersion in the data through repeated reading of the transcripts; (2) systematic coding of the data; (3) development of preliminary themes; (4) revision of those themes; (5) selection of a final set of themes; (6) organization of the final written product around those themes. This approach can also be applied in both a more deductive format, where codes are based on pre-existing theory, or a more inductive fashion, where codes are derived from the interviews themselves.
Morgan and Hoffman (2018: 259-261) also address the potential benefits and limitations of online focus groups.
Analysis
My QDA processes are most influenced by Kathy Charmaz and Johnny Saldaña, as well as the practical experiences instilled during my PhD and while working on E-CURATORS.
Sensitizing concepts
According to Bowen (2006: 13-14), Blumer (1954, 1969) first established the term “sensitizing concepts” to differentiate them from “definitive concepts”.
According to Blumer (1954: 7):
A definitive concept refers precisely to what is common to a class of objects, by the aid of a clear definition in terms of attributes or fixed bench marks. … A sensitizing concept lacks such specification of attributes or bench marks and consequently it does not enable the user to move directly to the instance and its relevant content. Instead, it gives the user a general sense of reference and guidance in approaching empirical instances. Whereas definitive concepts provide prescriptions of what to see, sensitizing concepts merely suggest directions along which to look.
From Kelle (2005):
Herbert Blumer invented the term “sensitizing concepts” to describe theoretical terms which “lack precise reference and have no bench marks which allow a clean cut identification of a specific instance” (1954, p.7). Sensitizing concepts are useful tools for descriptions but not for predictions, since their lack of empirical content permits researchers to apply them to a wide array of phenomena. Regardless how empirically contentless and vague they are, they may serve as heuristic tools for the construction of empirically grounded theories.
According to Charmaz (2003: 259), sensitizing concepts are “those background ideas that inform the overall research problem”. Charmaz goes on to state that:
Sensitizing concepts offer ways of seeing, organizing, and understanding experience; they are embedded in our disciplinary emphases and perspectival proclivities. Although sensitizing concepts may deepen perception, they provide starting points for building analysis, not ending points for evading it. We may use sensitizing concepts only as points of departure from which to study the data.
In her textbook on constructivist grounded theory, Charmaz (2014: 30-31) writes:
A sensitizing concept is a broad term without definitive characteristics; it sparks your thinking about a topic (Van den Hoonaard 1997). Sensitizing concepts give researchers initial but tentative ideas to pursue and questions to raise about their topics. Grounded theorists use sensitizing concepts as tentative tools for developing their ideas about processes that they define in their data. If particular sensitizing concepts prove to be irrelevant, then we dispense with them.
Thus, sensitizing concepts may guide but do not command inquiry, much less commandeer it (Charmaz, 2008e). Treat these concepts as points of departure for studying the empirical world while retaining the openness for exploring it. In short, sensitizing concepts can provide a place to start inquiry, not to end it.
Bowen (2006: 15) indicates that sensitizing concepts may emerge from reading the prior literature. In the example he provides, he treated them as variables against which he would read the text; they thereby combined to serve as an analytical frame, or a point of reference and guide, in data analysis.6
Patton (2014) quotes Denzin (1978: 9), whom he claims captured the essence of how sensitizing concepts guide fieldwork:7
The observer moves from sensitizing concepts to the immediate world of social experience and permits that world to shape and modify his conceptual framework. In this way he moves continually between the realm of more general social theory and the worlds of native people. Such an approach recognizes that social phenomena, while displaying regularities, vary by time, space, and circumstance. The observer, then, looks for repeatable regularities. He uses ritual patterns of dress and body-spacing as indicators of self-image. He takes special languages, codes and dialects as indicators of group boundaries. He studies his subject’s prized social objects as indicators of prestige, dignity and esteem hierarchies. He studies moments of interrogation and derogation as indicators of socialization strategies. He attempts to enter his subject’s closed world of interaction so as to examine the character of private versus public acts and attitudes.
Coding
These notes are largely derived from my reading of Saldaña (2016), which provides a practical overview of what coding entails, along with specific methods and techniques.
Coding as component of knowledge construction:
- Coding is an intermediate step, “the critical link” between data collection and their explanation or meaning” (from Charmaz 2001; as quoted in Saldaña 2016: 4).
- “coding is usually a mixture of data [summation] and data complication … breaking the data apart in analytically relevant ways in order to lead toward further questions about the data” (from Coffey and Atkinson 1996: 29-31; as quoted and edited in Saldaña 2016: 9).
- This relates to the paired notions of decoding, when we reflect on a passage to decipher its core meaning, and encoding, when we determine its appropriate code and label it (Saldaña 2016: 5).
- Coding “generates the bones of your analysis. … [I]ntegration will assemble those bones into a working skeleton” (from Charmaz 2014: 113; as quoted in Saldaña 2016: 9).
- To codify is to arrange things in a systematic order, to make something part of a system or classification, to categorize
- What I sometimes refer to as arranging the code tree
- What Saldaña (2016) refers to as categories, I tend to refer to as stubs
- Categories are arranged into themes or concepts, which in turn lead to assertions or theories
Pre-coding techniques
Saldaña (2016) identifies several techniques for formatting the data to make them easier to code, but also to imbue meaning in the text.
- Data layout
- Separation between lines or paragraphs may hold significant meaning
- Putting interviewer words in square brackets or capital letters
- Semantic markup
- Bold, italics, underline, highlight
- Meant to identify “codable moments” worthy of attention (Boyatzis 1998; as referenced in Saldaña 2016: 20)
- Relates to Saldaña’s (2016: 22) prompt: “what strikes you?”
- Preliminary jottings
- Tri-column exercise with the text on the left, first impression or preliminary code in the middle, and code on the right, after Liamputtong and Ezzy (2005: 270-273).
- Asking questions back to the interviewer, or participating in an imagined dialogue
- I imagine this might be useful in situations where the time to hold an interview is quite limited and I have to work with limited responses that don’t touch on everything I want to cover
- The question form maintains my tentativeness, my unwillingness to commit to or presume their responses, and opens the door for their own responses in rebuttal
Following Emerson et al. (2011: 177), Saldaña (2016: 22) identifies a few key questions to keep in mind while coding:
- What are people doing? What are they trying to accomplish?
- How, exactly, do they do this? What specific means and/or strategies do they use?
- How do members talk about, characterize, and understand what is going on?
- What assumptions are they making?
- What do I see going on here?
- What did I learn from these notes?
- Why did I include them?
- How is what is going on here similar to, or different from, other incidents or events recorded elsewhere in the fieldnotes?
- What is the broader import or significance of this incident or event? What is it a case of?
Memos
Charmaz (2014: 162) dedicates Chapter 7 to memo-writing, which she frames as “the pivotal intermediate step between data collection and writing drafts of papers.” She locates the power of memo-writing in how it prompts the researcher to analyze the data and codes early in the research process, which requires pausing to reflect.
Charmaz (2014: 162):
Memos catch your thoughts, capture the comparisons and connections you make, and crystallize questions and directions for you to pursue.
Charmaz (2014: 162):
Memo-writing creates an interactive space for conversing with yourself about your data, codes, ideas, and hunches. Questions arise. New ideas occur to you during the act of writing. Your standpoints and assumptions can become visible. You will make discoveries about your data, emerging categories, the developing frame of your analysis — and perhaps about yourself.
Charmaz (2014: 164):
Memo-writing encourages you to stop, focus, take your codes and data apart, compare them, and define links between them. Stop and catch meanings and actions. Get them down on paper and into your computer files.
Charmaz (2014: 164):
Memos are your path to theory construction. They chronicle what you grappled with and learned along the way.
Charmaz (2014: 165-168) distinguishes between memo-writing and journaling. The former is meant to be more analytical, whereas the latter is more of an account of a direct experience, including significant memories or recollections of moments that stood out (and reflection on why they stood out).
Charmaz (2014: 171) indicates that “[n]o single mechanical procedure defines a useful memo. Do what is possible with the material you have.” She then lists a few possible approaches to memo-writing:
- Define each code or category by its analytic properties
- Spell out and detail processes subsumed by the codes or categories
- Make comparisons between data and data, data and codes, codes and codes, codes and categories, categories and categories
- Bring raw data into the memo
- Provide sufficient empirical evidence to support your definitions of the category and analytic claims about it
- Offer conjectures to check in the field setting(s)
- Sort and order codes and categories
- Identify gaps in the analysis
- Interrogate a code or category by asking questions of it.
Charmaz (2014: 171) draws special attention on bringing data into the memo as a way to more concretely “ground” the abstract analysis in the data and lay the foundation for making claims about them:
Including verbatim material from different sources permits you to make precise comparisons right in the memo. These comparisons enable you to define patterns in the empirical world. Thus, memo-writing moves your work beyond individual cases.
Through a detailed example over the prior several pages, Charmaz (2014: 178) reflects on how memos may “[hint] at how sensitizing concepts, long left silent, may murmur during coding and analysis”. She recalls how writing a memo encouraged her to look back at ideas presented in pivotal texts that she had read earlier in her career, and thereby committed her to a new strand of thought.
Charmaz (2014: 180) describes how she developed memos from in vivo codes that recurred across the cases. She asked how the saying was applied in different contexts, and about its overlapping and varied meanings.
Charmaz (2014: 183-?) encourages the adoption of various writing strategies. She notes that “memo-writing requires us to tolerate ambiguity”, which is inherent in the “discovery phase” of research, of which she considers memo-writing a part. She encourages the adoption of clustering and freewriting techniques to help get the ball rolling (she refers to these as “pre-writing techniques”).
Saldaña (2016) dedicates Chapter 2 to “writing analytic memos”. Saldaña (2016: 44) notes that codes are nothing more than labels until they are analyzed, and remarks that memo-writing is a stage in the process of getting beyond the data. He refers to Stake (1995: 19), who mused that “Good research is not about good methods as much as it is about good thinking”, and in keeping with Charmaz’s (2014) account of memo-writing, memos are tools for doing good thinking.
Saldaña (2016: 44-45) channels Charmaz in saying that all memos are analytic memos.8
I identify with his discomfort in writing according to a pre-defined category of memos, and his preference for categorizing them after writing the memo. He also suggests writing a title or brief description to help with sorting.
However, like Charmaz, Saldaña does differentiate between memos and “field notes”, which are synonymous with Charmaz’s journal entries. According to Saldaña (2016: 45), field notes are the researcher’s personal and subjective responses to social actions encountered during data collection.
Saldaña (2016: 53-54) reports on O’Connor’s (2007: 8) conceptualization of contemplation of qualitative data as refraction.
This perspective acknowledges the mirrored reality and the researcher’s lens as dimpled and broken, obscured in places, operating as a concave or at other times a convex lens. As such, it throws unexpected and distorted images back. It does not imitate what looks into the mirror but deliberately highlights some things and obscures others. It is deliciously … unpredictable in terms of what might be revealed and what might remain hidden.
Other analogies include that by Stern (2007: 119): “If data are the building blocks of the developing theory, memos are the mortar”, and by Birks and Mills (2015: 40), who consider memos the “lubricant” of the analytic machine and “a series of snapshots that chronicle your study experience”.
See Montgomery and Bailey (2007) and McGrath (2021) for more on the distinction between memos and field notes, including detailed examples of these kinds of writing in action.
Preliminary analyses
Saldaña (2016) dedicates Chapter 6 to “post-coding and pre-writing transitions”. He frames these as a series of strategies to help crystallize the analytical work and springboard into written documents and reports.
Top-10 lists
Saldaña (2016: 274-275) suggests coming up with a list of the top ten quotes or passages as a potentially useful “focusing strategy”: identify the passages (no longer than half a page each) that strike me as the most vivid and/or representational of my study. He suggests reflecting on the content of these items and arranging them in various orders to discover different ways of structuring or outlining the write-up of the research story. He provides some examples of orders to consider:
- chronologically
- hierarchically
- telescopically
- episodically
- narratively
- from the expository to the dramatic
- from the mundane to the insightful
- from the smallest detail to the bigger picture
The study’s “trinity”
Saldaña (2016: 275-276) suggests identifying the three (and only three) major codes, categories, themes and/or concepts generated thus far that strike me as standing out in my study. He suggests identifying which one is dominant, how this status relates to or impacts the other codes or concepts, and generally tracing the relationships between these ideas. He suggests plotting them as a Venn diagram to identify what aspects overlap across items, and to label those areas of overlap. Although he does not mention this explicitly, I imagine these points of overlap represent the synthesis of new emergent ideas.
Codeweaving
Saldaña (2016: 276) addresses codeweaving as a viable strategy, but I don’t actually think of it as that useful for my purposes. It seems a bit contrived and has lots of potential to be over-engineered.
From coding to theorizing
For Saldaña (2016: 278), the stage at which he finds theories emerging in his mind is when he starts coming up with categories of categories. At this point, a level of abstraction occurs that transcends the particulars of a study, enabling generalizable transfer to other comparable contexts.
Saldaña (2016: 278-279) identifies a few structures through which these categories of categories might emerge:
- Superordinate and Subordinate Arrangements: Arrange categories as an outline, which suggests discrete linearity and classification. Supercategories and subcategories are “ranked” with numbers or capital letters.
- Taxonomy: Categories and their subcategories are grouped but without any inferred hierarchy; each category seems to have equal weight.
- Hierarchy: Categories are ordered from most to least in some manner, i.e. frequency, importance, impact, etc.
- Overlap: Some categories share particular features with others while retaining their unique properties.
- Sequential Order: The action suggested by categories progresses in a linear manner.
- Concurrency: Two or more categories operate simultaneously to influence and affect a third.
- Domino Effects: Categories cascade forward in multiple pathways.
- Networks: Categories interact and interplay in complex pathways to suggest interrelationship.
The arrows connecting categories are meaningful in their own right. Saldaña (2016: 280) references Urquhart (2013), who states that category relationships are necessary to develop assertions, propositions, hypotheses and theories. He suggests inserting words or phrases between categories that plausibly establish their connections, as suggested by the data and analytical memos. Saldaña (2016: 280-281) lists several possible connectors:
- accelerates
- contributes toward
- depends on the types of
- drives
- facilitates
- harnesses
- increases
- increases the difficulty of
- is affected by
- is essential for
- is necessary for
- provides
- reconciles
- reduces
- results in
- results in achieving
- triggers
- varies according to
- will help to
Moreover, Saldaña (2016: 281) suggests that if you end up with categories as nouns or noun phrases, it could be helpful to transform them into gerund phrases. This will help get a better sense of process and action between categories.
Findings “at a glance”
Following Henwood and Pidgeon (2003), Saldaña (2016: 283) suggests creating a tri-column chart that outlines the findings and the sources of evidence and reasoning that underlie them. See the specific page for a good example.
The constant comparative method
The constant comparative method is based on action codes, similar to what Saldaña (2016) refers to as process codes. According to Charmaz (2000: 515):
The constant comparative method of grounded theory means (a) comparing different people (such as their views, situations, actions, accounts, and experiences), (b) comparing data from the same individuals with themselves at different points in time, (c) comparing incident with incident, (d) comparing data with category, and (e) comparing categories with other categories.
My initial impression is that this is very well suited for Stake’s (2006) multicase study framework, specifically with regards to his notion of the case-quintain dilemma. It also seems very well suited for analysis of situational meaning-making, as per Suchman (1987), Lave and Wenger (1991), Knorr Cetina (2001) and symbolic interactionism at large.
Situational analysis
Situational analysis originates from Strauss’s social worlds/arenas/negotiations framework. From Clarke (2003: 554):
Building on and extending Strauss’s work, situational analyses offer three main cartographic approaches:
- situational maps that lay out the major human, nonhuman, discursive, and other elements in the research situation of concern and provoke analyses of relations among them;
- social worlds/arenas maps that lay out the collective actors, key nonhuman elements, and the arena(s) of commitment within which they are engaged in ongoing negotiations, or mesolevel interpretations of the situation; and
- positional maps that lay out the major positions taken, and not taken, in the data vis-à-vis particular discursive axes of variation and difference, concern, and controversy surrounding complicated issues in the situation.
Clarke (2003: 560) identifies the main purpose of situational maps as a way of “opening up” the data, figuring out where and how to enter:
Although they may do so, a major and perhaps the major use for them is “opening up” the data — interrogating them in fresh ways. As researchers, we constantly confront the problem of “where and how to enter.” Doing situational analyses offers new paths into the full array of data sources and lays out in various ways what you have to date. These approaches should be considered analytic exercises — constituting an ongoing research “work out” of sorts — well into the research trajectory. Their most important outcome is provoking the researcher to analyze more deeply.
She emphasizes that this is meant to stimulate thinking, and should always be paired with comprehensive memoing before, during and after situational mapping exercises.
A key feature that I think is invaluable is the ability to uncover the sites of silence, or the things that I as a researcher suspect are there but are not readily visible in my evidence. Situational analysis is useful for drawing out the thousand-pound gorillas in the room that no one wants to talk about, and is therefore important for identifying things to address during continual data collection, which is one of the (often ignored) central pillars of grounded theory:
The fourth and last caveat is perhaps the most radical. As trained scholars in our varied fields, usually with some theoretical background, we may also suspect that certain things may be going on that have not yet explicitly appeared in our data. As ethically accountable researchers, I believe we need to attempt to articulate what we see as the sites of silence in our data. What seems present but unarticulated? What thousand-pound gorillas are sitting around in our situations of concern that nobody has bothered to mention as yet (Zerubavel 2002)? Why not? How might we pursue these sites of silence and ask about the gorillas without putting words in the mouths of our participants? These are very important directions for theoretical sampling. That is, the usefulness of the approaches elucidated here consists partly in helping the researcher think systematically through the design of research, especially decisions regarding future data to collect.
The process is remarkably similar to the brainstorming exercise I did with Costis one time. It starts by articulating the actors involved, their roles and relationships, the things they do, and the things they make.
The goal here is to lay out as best one can all the human and nonhuman elements in the situation of concern of the research broadly conceived. In the Meadian sense, the questions are: Who and what are in this situation? Who and what matters in this situation? What elements “make a difference” in this situation?
After jotting these down on a canvas or whiteboard, arrange them into more concrete categories. In each category, refer to examples or specific instances. Concepts can occur in multiple categories. By arranging these concepts, grouping them thematically, spatially, and through relationships and associations with arrows or lines (presumably, with labels indicating the nature of these relationships), this provides the researcher with a viable way of unlocking new pathways to think through their data. I imagine this will be especially helpful when arranging or re-arranging the coding tree.
Clarke (2003: 569-570) also dedicates a few paragraphs to relational forms of analysis using this technique:
Relations among the various elements are key. You might not think to ask about certain relations, but if you do what I think of as quick and dirty relational analyses with the situational map, they can be revealing. The procedure here is to take each element in turn and think about it in relation to each other element on the map. One does this by circling one element and mentally or literally drawing lines, one at a time, between it and every other element on the map and specifying the nature of the relationship by describing the nature of that line.
You could highlight (in blue perhaps) that organization’s perspectives on all the other actors to see which actors are attended to and which are not, as well as the actual contents of the organization’s discourses on its “others.” Silences can thus be made to speak.
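Clarke’s pairwise procedure is mechanical enough to sketch in code: take each element on the map and consider it against every other element exactly once. A minimal sketch in Python, where the element names are hypothetical placeholders (not from my data), generating one memoing prompt per pair:

```python
from itertools import combinations

def relational_prompts(elements):
    """Generate one memoing prompt per pair of situational-map elements,
    following Clarke's suggestion to take each element in turn and
    describe the nature of its relation to every other element."""
    return [
        f"What is the nature of the relation between '{a}' and '{b}'?"
        for a, b in combinations(elements, 2)
    ]

# Hypothetical map elements, for illustration only
elements = ["excavation director", "recording software", "funding agency"]
for prompt in relational_prompts(elements):
    print(prompt)
```

For a map of n elements this yields n(n-1)/2 prompts, which makes concrete why Clarke warns that the exercise is tiring: even a modest 20-element map implies 190 relations to think through.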
Clarke (2003: 569) On how to generate memos from situational analysis:
Each map in turn should lead to a memo about the relations diagrammed. At early stages of analysis, such memos should be partial and tentative, full of questions to be answered about the nature and range of particular sets of social relations, rather than being answers in and of themselves.
Clarke (2003: 570) addresses the energy or mood required to do this kind of work, which is reminiscent of Yin’s (2014: 73-74) description of the mental and emotional exhaustion at the end of each fieldwork day, due to the depletion of “analytic energy” associated with being on your toes. It is interesting that the mention of “freshness” is brought up in the context of Glaser’s tabula rasa approach to grounded theory.
As a practical matter, doing the situational map and then the relational analyses it organizes can be tiring and/or anxiety producing and/or elating. Work on it until you feel stale and then take a break. This is not the same order of work as entering bibliographic citations. The fresher you are, the more you can usually see. Glaser (1978: 18-35) cautions against prematurely discussing emergent ideas — that we might not necessarily benefit from talking about everything right away but rather from reflection — and memoing. I strongly agree, especially about early even if quick and dirty memoing. But we all must find our own ways of working best. For most, the work of this map occurs over time and through multiple efforts and multiple memos.
Clarke (2003) refers to Shim (2000) as an exemplary case of situational analysis in action.
QDA software and tooling
Weitzman (2000) provides an overview of software and qualitative research, including a minihistory up to the year 2000 when the chapter was published.
Describing the first programs specifically designed for analysis of qualitative data, Weitzman (2000: 804) writes:
Early programs like QUALOG and the first versions of NUDIST reflected the state of computing at that time. Researchers typically accomplished the coding of texts (tagging chunks of texts with labels — codes — that indicate the conceptual categories the researcher wants to sort them into) by typing in line numbers and code names at a command prompt, and there was little or no facility for memoing or other annotation or markup of text.9 In comparison with marking up text with coloured pencils, this felt awkward to many researchers. And computer support for the analysis of video or audio data was at best a fantasy.
This history is followed by a sober account of what software can and cannot do in qualitative research, as well as affirmation and dismissal of various hopes and fears. Very reminiscent of Huggett (2018).
Writing
Richardson (2000) frames writing as a method of inquiry. Much of this text deals with the history of social science writing and the impact of postmodernism, which I’m honestly not really that interested in, at this point at least. She does list a series of very broad and somewhat obvious techniques, which I do not really find that helpful.
From Charmaz and Mitchell (1996: 286):
For the most part, social science researchers are not expected to speak, and if they do, we need not listen. While positivism and postmodernism claim to offer open forums, both are suspicious of authors’ voices outside of prescribed forms. Extremists in both camps find corruption in speech. It lacks objectivity and value neutrality in the positivist idiom; it expresses racist, Eurocentric, and phallocentric oppression in the postmodern view. Our point is this: There is merit in humility and deference to subjects’ views, and there is merit in systematic and reasoned discourse. But there is also merit in audible authorship.
Charmaz (2000: 526-528) illustrates the impact of various writing techniques on conveying the findings. Specifically, she emphasizes techniques that draw the reader into the story, and which acknowledge the role of the researcher in forming the narrative.
Although not explicitly about writing, Charmaz’s (2000) perspective relates to Ellis and Bochner (2000), which is one of the best pieces of academic work I’ve ever read. In this book section, the authors illustrate the importance of capturing and representing perspective in addition to the plain facts, which helps to inform the reader about the real impacts and implications of a topic for those who actually experience it. I am unable to do this work justice in these notes.
Writing techniques
These are some writing strategies to get ideas flowing. Both Saldaña (2016) and Charmaz (2014) actually refer to these as “pre-writing” techniques that occur as a researcher is trying to extend their observations and their notes into broader forms. However, I tend to consider writing as a more open-ended aspect of research, whose borders are less defined, so I am referring to these as writing techniques in a general sense.
Analytical storylining
Following Charmaz (2010: 201), Saldaña (2016: 287) suggests using processual words that suggest the unfolding of events through time that “reproduce the tempo and mood of the [researched] experience”. However, he cautions that this is not suitable for every situation, and to be aware of how this framing differs from positivist notions of cause and effect.
One thing at a time
Saldaña (2016: 287) suggests documenting each category, theme or concept one at a time as a way of disentangling very interrelated ideas. This helps me stay focused as a writer, and may help maintain a more focused reading experience too.
Writing about methods
Saldaña (2016: 284-286) addresses some common tendencies and challenges with regards to writing about qualitative coding practices:
Researchers provide brief storied accounts of what happened “backstage” at computer terminals and during intensive work sessions. After a description of the participants and the particular data collected for a study, descriptions of coding and analysis procedures generally include: references to the literature that guided the analytic work; qualitative data organization and management strategies for the project; the particular coding and data analytic methods employed and their general outcomes; and the types of CAQDAS programs and functions used. Some authors may include accompanying tables or figures that illustrate the codes or major categories constructed. Collaborative projects usually explain team coding processes and procedures for reaching intercoder agreement or consensus. Some authors may also provide brief confessional anecdotes that highlight any analytic dilemmas they may have encountered.
Saldaña (2016: 286) then suggests using introductory phrases and italicized text to emphasize the major outcomes of the analysis, helping readers grasp the headlines of your research story. For example:
- “The three major categories constructed from transcript analysis are …”
- “The core category that led to the grounded theory is …”
- “The key assertion of this study is …”
- “A theory I propose is …”
Clustering
From Charmaz (2014: 184-187). Clustering essentially comprises visually mapping the core and peripheral ideas, similar to Saldaña’s “trinity” technique and Clarke’s situational analysis. Charmaz (2014) encourages experimentation, free from commitment to a specific arrangement.
Charmaz (2014: 185) provides a series of directions that may help effectively implement this technique:
- Start with the main topic or idea at the center
- Work quickly
- Move out from the nucleus into smaller subclusters
- Keep all related material in the same subcluster
- Make the connections clear between each idea, code, and/or category
- Try several different clusters on the same topic
- Use clustering to play with your material
Clustering enables you to define essentials. It allows for chaos and prompts you to create paths through it.
Freewriting
From Charmaz (2014: 186-188). Simply putting fingers to keyboard and writing for 8-10 minutes straight. Write to and for yourself. Permit yourself to write badly. Don’t attend to grammar, organization, logic, evidence or audience. Write as though you are talking.
References
Footnotes
The term refers to a medieval jousting target: see https://en.wikipedia.org/wiki/Quintain_(jousting)↩︎
Though Yin (2014: 40-44) is dismissive of such use of the term “sample” since he sees case study research as only generalizable to similar situations, and not to a general population from which a sample is typically said to be drawn. I agree with this focus on concrete situations over Stake’s prioritization of theory-building as an end unto itself.↩︎
Charmaz uses the term “giving voice” in this specific context. I’m not sure if this is meant to represent Strauss and Corbin’s attitude, and whether this is an accurate representation on their views, but in my mind this should be framed as elevating, amplifying or re-articulating respondents’ voices (and this is a tenet of constructivist grounded theory in general, which derives from Charmaz). My take diverges from the position that we “give voice” to respondents in that it acknowledges (1) that the voices are already there, (2) that respondents are in fact giving us their voices, and (3) that the researcher plays an active editorial role, transforming the respondents’ elicitations into a format that is more amenable to analysis.↩︎
Very much in line with the pragmatist turn of the late ’90s and early ’00s, as also documented by Lucas (2019: 54-57) in the context of archaeological theory, vis-à-vis positivism, postmodernism, and settling on a middle ground between them.↩︎
open-ended questions, asked of all participants in the same order.↩︎
This seems reminiscent of procedures planned for WP1 and WP2 from my FWO grant application.↩︎
Contrast this with Richardson (2000), who identified several kinds of notes:
- Observation notes: Concrete and detailed, accurate renditions of things I see, hear, feel and taste, and so on. Remain close to the scene as experienced through the senses.
- Methodological notes: Messages to self about how to collect data — who to talk to, what to wear, when to get in touch.
- Theoretical notes: Hunches, hypotheses, poststructuralist connections, critiques of what I’m doing/thinking/seeing, keeping me from getting hooked on one view of reality.
- Personal notes: Uncensored feeling statements about the research, the people I’m talking to, my doubts, anxieties, pleasures. These can also be great sources of hypotheses (for example, if I am feeling anxious, others I’m speaking with might feel the same way, which provides a string to tug on).
This caught my eye since it’s the same approach as that adopted by qc!↩︎