Methodology notes
This document is an overview of methodological topics and concerns. It is a place where I think through and justify my methodological decisions, and identify the methods and procedures through which I implement them.
Significant Concepts and Frameworks
Multicase Studies
These notes describe the features, affordances and limitations of case study research, and articulate the factors that correspond with different kinds of case studies.
I do notice a distinction between two schools of thought, which seem to be spearheaded by Stake and Yin. I generally favour Stake’s flexible approach, and it seems well aligned with other methodological works I’ve been reading (e.g. Abbott 2004; Charles C. Ragin and Becker 1992).
Stake’s Approach
In case-study research, cases represent discrete instances of a phenomenon that inform the researcher about it. The cases are not the subjects of inquiry, and instead represent unique sets of circumstances that frame or contextualize the phenomenon of interest (Stake 2006: 4-7).
Cases usually share common reference to the overall research themes, but exhibit variations that enable a researcher to capture different outlooks or perspectives on matters of common concern. Drawing from multiple cases thus enables comprehensive coverage of a broad topic that no single case may cover on its own (Stake 2006: 23). In other words, cases are contexts that ascribe particular local flavours to the activities I trace, and which I must consider to account fully for the range of motivations, circumstances and affordances that back decisions to perform activities and to implement them in specific ways.
Moreover, the power of case study research derives from identifying consistencies that relate cases to each other, while simultaneously highlighting how their unique and distinguishing facets contribute to their representativeness of the underlying phenomenon. Case study research therefore plays on the tensions that challenge relationships among cases and the phenomenon that they are being called upon to represent (C. C. Ragin 1999: 1139-1140).
Stake (2006: 4-6) uses the term quintain1 to describe the group, category or phenomenon that bind together a collection of cases. A quintain is an object, phenomenon or condition to be studied – “a target, not a bull’s eye” (Stake 2006: 6). “The quintain is the arena or holding company or umbrella for the cases we will study” (Stake 2006: 6). The quintain is the starting point for multi-case research.
1 The term refers to a medieval jousting target: see https://en.wikipedia.org/wiki/Quintain_(jousting)
According to Stake (2006: 6):
Multicase research starts with the quintain. To understand it better, we study some of its single cases — its sites or manifestations. But it is the quintain we seek to understand. We study what is similar and different about the cases in order to understand the quintain better.
Stake (2006: 8) then goes on:
When the purpose of a case is to go beyond the case, we call it an “instrumental” case study. When the main and enduring interest is in the case itself, we call it “intrinsic” case study (Stake 1988). With multicase study and its strong interest in the quintain, the interest in the cases will be primarily instrumental.
Abbott’s (2004: 22) characterization of Small-N comparison is very reminiscent of Stake’s (2006) account of the case-quintain dialectic:
Small-N comparison attempts to combine the advantages of single-case analysis with those of multicase analysis, at the same time trying to avoid the disadvantages of each. On the one hand, it retains much information about each case. On the other, it compares the different cases to test arguments in ways that are impossible with a single case. By making these detailed comparisons, it tries to avoid the standard criticism of single-case analysis — that one can’t generalize from a single case — as well as the standard criticism of multicase analysis — that it oversimplifies and changes the meaning of variables by removing them from their context.
It should be noted that case study research limits my ability to define causal relationships or to derive findings that may be generalized across the whole field of epidemiology. This being said, case study research allows me to articulate the series of interwoven factors that impact how epidemiological researchers coordinate and participate in data-sharing initiatives, while explicitly accounting for and drawing from the unique and situational contexts that frame each case.
Stake (2006: 23) recommends selecting between 4-10 cases and identifies three main criteria for selecting cases:
- Is the case relevant to the quintain?
- Do the cases provide diversity across contexts?
- Do the cases provide good opportunities to learn about complexity and contexts?
For qualitative fieldwork, we will usually draw a purposive sample of cases, a sample tailored to our study; this will build in variety and create opportunities for intensive study (Stake 2006: 24).2
2 Though Yin (2014: 40-41) is dismissive of such use of the term “sample”, since he sees case study research as only generalizable to similar situations, and not to a general population from which a sample is typically said to be drawn. I agree with this focus on concrete situations over Stake’s prioritization of theory-building as an end unto itself.
Yin’s Approach
According to Yin (2014: 16), “a case study is an empirical inquiry that investigates a contemporary phenomenon (the ‘case’) in depth and within its real-world context, especially when the boundaries between phenomenon and context may not be clearly evident.”
He goes on to document some features of a case study: “A case study inquiry copes with the technically distinctive situation in which there will be many more variables of interest than data points, and as one result relies on multiple sources of evidence, with data needing to converge in a triangulating fashion, and as another result benefits from the prior development of theoretical propositions to guide data collection and analysis.” (Yin 2014: 17)
Yin (2014) is more oriented toward what he refers to as a realist perspective, which he pits against relativist and interpretivist perspectives (terms he seems to use interchangeably), and which I might refer to as constructivist. He characterizes relativist perspectives as “acknowledging multiple realities having multiple meanings, with findings that are observer dependent”. His prioritizing of a realist approach corresponds with the analysis by Yazan (2015), who compared Yin with Stake and Merriam. According to Yazan (2015: 137), Yin evades making statements about his epistemic commitments, and is characterized as post-positivist.
Yin (2014) is very concerned with research design in case study research. He posits that, in a colloquial sense, “a research design is a logical plan for getting from here to there, where here may be defined as the initial set of questions to be answered, and there is some set of conclusions (answers) about these questions.” (Yin 2014: 28)
Yin distinguishes between a research design and a work plan. A research design deals with a logical problem, whereas a work plan deals with a logistical problem. This seems reminiscent of Brian Cantwell Smith’s distinction between skeletons and outlines.
Yin lists five components of a research design:
- A case study’s questions;
- its propositions, if any;
- its unit(s) of analysis;
- the logic linking the data to the propositions; and
- the criteria for interpreting the findings.
Interestingly, I have been instinctively following these steps, and am currently hovering somewhere between components 3 and 4, while dipping back to 2 once in a while too.
The problem of defining the unit of analysis is salient to me right now. According to Yin (2014: 32), the unit of analysis may change as the project progresses, depending on initial misconceptions (he uses the example of a unit of analysis changing from neighbourhoods to small groups, contextualized by the socio-geographical entity of the neighbourhood, which is laden with issues of class, race, etc.). In my own situation, the unit of analysis may hover between the harmonization initiative and the people, activities and infrastructures that make it work.
In the section on criteria for interpreting the findings, Yin emphasizes the role of rival theories, which is akin to a concern with falsifiability as a means of validating truth claims, and which betrays his positivist leanings. This may be compared with Stake’s emphasis on triangulation, which is more concerned with internal cohesiveness. Similarly, Yin cites Corbin and Strauss regarding the role of theory or theoretical propositions in research design, which similarly reveals a concern with rigorous upfront planning and strict adherence to research design as a key aspect of deriving valid findings.
Regarding generalizability, Yin (2014: 40-41) states that “Rather than thinking about your case as a sample, you should think of it as the opportunity to shed empirical light about some theoretical concepts or principles, not unlike the motive of a laboratory investigator in conceiving of and then conducting a new experiment.” He goes on to state that case studies tend to strive for analytic generalizations that go beyond the specific case that has been studied, and which apply to other concrete situations rather than just abstract theory building.
Preparing to select case study data
Yin (2014: 72-76) identifies five desired attributes for collecting case study data:
- Ask good questions — and interpret answers fairly.
- “As you collect case study evidence, you must quickly review the evidence and continually ask yourself why events or perceptions appear as they do.” (73)
- A good indicator of having asked good questions is mental and emotional exhaustion at the end of each fieldwork day, due to the depletion of “analytic energy” associated with staying attentive and on your toes. (73-74)
- Be a good “listener” not trapped by existing ideologies or preconceptions.
- Sensing through multiple modalities, not just spoken words.
- Also subtext, as elicited through choices of terms used, mood and affective components. (74)
- Stay adaptive, so that newly encountered situations can be seen as opportunities, not threats.
- Remember the original purpose but be willing to adapt to unanticipated circumstances. (74)
- Emphasize balancing adaptability with rigour, but not with rigidity. (75)
- Have a firm grasp of what is being studied, even when in an exploratory mode.
- One needs to do more than merely record data: interpret information as it is being collected and know immediately whether there are contradictions or complementary statements to follow up on. (75-76)
- Avoid bias by being sensitive to contrary evidence, and know how to conduct research ethically.
- Maintain strong professional competence, including keeping up with related research, ensuring accuracy, striving for credibility, and acknowledging and mitigating bias.
Yin advocates for adoption of case study protocols. He provides an example of a table of contents for case study protocols, which generally comprise four sections:
- Overview of the case study
- Data collection procedures
- Data collection questions
- Guide for the case study report
Triangulation
Triangulation is a process of gaining assurance. It is also sometimes called crystallization.
“Each important finding needs to have at least three (often more) confirmations and assurances that key meanings are not being overlooked.” (Stake 2006: 33) Triangulation is a process of repetitious data gathering and critical review of what is being said. (Stake 2006: 34)
What needs triangulation? (Stake 2006: 35-36)
- If the description is trivial or beyond question, there is no need to triangulate.
- If the description is relevant and debatable, there is much need to triangulate.
- If the data are critical to a main assertion, there is much need to triangulate.
- If the data are evidence for a controversial finding, there is much need to triangulate.
- If a statement is clearly a speaker’s own interpretation, there is little need to triangulate its content; only the accuracy of the quotation needs confirming.
Stake (2006: 37) cites Denzin (1989) who highlighted several kinds of triangulation, leading to a few advisories:
- Find ways to use multiple rather than single observers of the same thing.
- Use second and third perspectives, e.g. the views of teachers, students and parents.
- Use more than one research method on the same thing, e.g. document review and interview.
- Check carefully to decide how much the total description warrants generalization.
- Do your conclusions generalize across other times or places?
- Do your conclusions about the aggregate generalize to individuals?
- Do findings of the interaction among individuals in one group pertain to other groups?
- Do findings about the aggregate of these people generalize to a population?
Cross-Case Analysis Procedure
Stake (2006: Chapter 3) lays out a procedure for deriving synthetic findings from data collected across cases. He frames this in terms of a dialectic between cases and quintains. He identifies three tracks (Stake 2006: 46):
- Track 1: Maintains the case findings and the situationality.
- Track 2: Merges similar findings, maintaining a little of the situationality.
- Track 3: The most quantitative track; shifts the focus from findings to factors.
According to Stake, case reports should be created independently and then brought together by a single individual when working in a collaborative project. In keeping with the case-quintain dialectic, this integration must involve strategically putting the cases aside and bringing them back in to identify convergences and divergences, similarities and differences, normalities and discrepancies among them.
There is some detailed discussion about different kinds of statements, i.e. themes, findings, factors and assertions, but this is more detail than I need to engage with at this point in my methodological planning. In general, though, Stake documents a process whereby an analyst navigates back and forth between the general and the situational, presenting tentative statements that are shored up, modified or discarded by testing the compatibility of the evidence across cases.
Some practical guidance
Stake (2006: 18-22) provides a detailed and realistic overview of common challenges involved in collaborative qualitative research. This could be handy in future work when planning a multicase project involving multiple researchers.
Stake (2006: 29-33) provides guidance on how to plan and conduct interviews in multicase research, including a series of helpful prompts and questions to ask yourself while designing the interview. One thing that stands out is his recommendation that an interview should be more about the interviewee than about the case. It’s necessary to find out about the interviewee to understand their interpretations, but what they reveal about the quintain is more important.
On page 34, Stake (2006) also provides some practical tips for documenting and storing data, after Huberman et al. (1994).
Stake (2006: Chapter 4) includes a chapter on procedures for reporting the findings, and I may return to this later on once I need to initiate this phase of work. It addresses concerns about how to articulate comparisons, concerns about generalization, and how to handle advocacy based on findings.
See Stake (2006) Chapter 5 for a step-by-step overview of a multicase study analysis. The rest of the volume after that includes three very detailed examples from his own work.
Enacting the findings
Stake (2010: 122) prioritizes doing research to understand something or to improve something, and I generally agree with his rationalization; research helps reframe problems and establish different decision options.
Single case studies
Stake (2000) is concerned with identifying what can be learned from a single case. He (2000: 437) identifies three kinds of cases:
- Intrinsic case studies are driven by a desire to understand the particular case.
- Instrumental case studies are examined “mainly to provide insight into an issue or to redraw a generalization.”
- Collective case studies “investigate a phenomenon, population or general condition”.
Stake (2000) frames case research around a tension between the particular and the general, which echoes the case-quintain dilemma he describes in Stake (2006: 4-6).
Grounded theory
This study follows an abductive qualitative data analysis framework to construct theories founded upon empirical evidence, which relates to, but is distinct from, grounded theory. Grounded theory consists of a series of systematic yet flexible guidelines for deriving theory from data through continuous and reiterative engagement with evidence (Charmaz 2014). My approach draws from what Charmaz (2014: 14-15) calls the “constellation of methods” associated with grounded theory that are helpful for making sense of qualitative data. However, it differs from grounded theory as traditionally conceived in that I came to the project with well-defined theoretical goals and have not made a concerted effort to allow theory to emerge solely through the analytical process.
Proponents of a more open-ended or improvised approach, as grounded theory was originally applied, argue that researchers should be free to generate theories in accordance with their own creative insights and their intimate engagements with the evidence. We can evaluate the quality of such work in terms of the dialogical commitments between researchers and their subjects, and between researchers and those who read their work (Glaser and Strauss 1967: 230-233).
Others view grounded theory more as a means of clarifying and articulating phenomena that lie below the surface of observable social experiences (Strauss and Corbin 1990; Kelle 2005). Proponents of this approach are very concerned with ensuring that concepts, themes and theories are truly represented in and limited by the data, and therefore prioritize adherence to systematic validation criteria to ensure the soundness of their claims.
Another view, known as constructivist grounded theory, most resembles my own approach. It recognizes that it is impossible to initiate a project without already holding ideas regarding the phenomena of interest, and that the ways that one ascribes meanings to the data represent already established mindsets or conceptual frameworks (Charmaz 2014). It encourages reflection on the researcher’s standpoint as they pursue an abductive approach rooted in their own preconceptions (Mills, Bonner, and Francis 2006).
Data Collection
Interviews
structured, semi-structured
Link to more detailed transcription protocol.
See (Yin 2014: 110-113)
See Becker (1998)
See Fontana and Frey (2000)
Transcribing
This section describes how I transcribe interviews and accounts for decisions to encode certain things and not others. It goes on to explain the procedures for transcribing spoken dialog into textual formats, including the notation applied to encode idiosyncratic elements of conversational speech.
Check out Silverman (2000), who writes about the nuanced challenges of working with and between verbal and textual media, and what this means for transcription.
Transcript notation
Derived from the transcription protocol applied for the E-CURATORS project.
Cleaning audio
To clean the audio:

- Select a clip that is representative of a single source of background noise; this serves as the profile used to filter that noise throughout the entire audio file.
- With the clip selected, go to `Effect >> Noise Reduction`, select `Get Noise Profile`, then press `OK`.
- Close the noise reduction menu and select the entire range of audio using the keyboard shortcut `Command + A`.
- Go back to the noise reduction window (`Effect >> Noise Reduction`) to apply the filter based on the noise profile identified for the noisy clip.
- Export the modified audio file to the working directory (`File >> Export >> Export as .WAV`).
- Use `ffmpeg` to replace the dirty audio track with the clean one:

```
ffmpeg -i dirty.mp4 -i clean.wav -c:v copy -map 0:v:0 -map 1:a:0 clean.mp4
```
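The final remuxing step lends itself to scripting when there are several recordings to process. Below is a minimal sketch in Python, assuming `ffmpeg` is installed and on the PATH; it simply wraps the command above, using the same example file names.

```python
import subprocess

def replace_audio_track(video_in: str, audio_in: str, video_out: str) -> None:
    """Remux a video with a cleaned audio track, mirroring the ffmpeg command above."""
    cmd = [
        "ffmpeg",
        "-i", video_in,   # original recording with the noisy audio track
        "-i", audio_in,   # cleaned audio exported from the editor
        "-c:v", "copy",   # copy the video stream without re-encoding
        "-map", "0:v:0",  # take the video stream from the first input
        "-map", "1:a:0",  # take the audio stream from the second input
        video_out,
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    replace_audio_track("dirty.mp4", "clean.wav", "clean.mp4")
```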
Observations
See Angrosino and Mays de Pérez (2000)
Field notes
- See (Yin 2014: 124-125)
Recording video
- affordances
QDA
My QDA processes are most influenced by Kathy Charmaz and Johnny Saldaña, as well as the practical experiences instilled during my PhD and while working on E-CURATORS.
Coding
These notes are largely derived from my reading of Saldaña (2016), which provides a practical overview of what coding entails and of specific methods and techniques.
Coding as component of knowledge construction:
- Coding is an intermediate step, “the ‘critical link’ between data collection and their explanation or meaning” (Charmaz (2001), as quoted in Saldaña (2016): 4)
- “coding is usually a mixture of data [summation] and data complication … breaking the data apart in analytically relevant ways in order to lead toward further questions about the data” (Coffey and Atkinson (1996): 29-31, as quoted and edited by Saldaña (2016): 9)
- This relates to the paired notions of decoding, when we reflect on a passage to decipher its core meaning, and encoding, when we determine its appropriate code and label it (Saldaña 2016: 5).
- Coding “generates the bones of your analysis. … [I]ntegration will assemble those bones into a working skeleton” (Charmaz (2014): 113, quoted in Saldaña (2016): 9)
- To codify is to arrange things in a systematic order, to make something part of a system or classification, to categorize
- What I sometimes refer to as arranging the code tree
- What Saldaña (2016) refers to as categories, I tend to refer to as stubs
- Categories are arranged into themes or concepts, which in turn lead to assertions or theories
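
To make the codes → categories (“stubs”) → themes arrangement concrete, here is a minimal sketch of a code tree as a Python data structure; the node labels are hypothetical placeholders rather than codes from this project.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class CodeNode:
    """A node in the code tree: a code, a category ('stub'), or a theme."""
    label: str
    children: list[CodeNode] = field(default_factory=list)

    def add(self, label: str) -> CodeNode:
        """Attach and return a child node."""
        child = CodeNode(label)
        self.children.append(child)
        return child

    def walk(self, depth: int = 0):
        """Yield (depth, label) pairs, depth-first, for printing the arranged tree."""
        yield depth, self.label
        for child in self.children:
            yield from child.walk(depth + 1)

# Hypothetical arrangement: theme -> category ('stub') -> codes
theme = CodeNode("data-sharing practices")
motivations = theme.add("motivations")
motivations.add("funder mandates")
motivations.add("reciprocity among collaborators")

for depth, label in theme.walk():
    print("  " * depth + label)
```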
Pre-coding techniques:
- Data layout
  - Separation between lines or paragraphs may hold significant meaning.
  - Putting interviewer words in square brackets or capital letters.
- Semantic markup
  - Bold, italics, underline, highlight.
  - Meant to identify “codable moments” worthy of attention (Boyatzis (1998), as referenced in Saldaña (2016): 20).
  - Relates to Saldaña’s (2016: 22) prompt: “what strikes you?”
- Preliminary jottings
  - Tri-column exercise with the text on the left, first impression or preliminary code in the middle, and code on the right, after Liamputtong and Ezzy (2005): 270-273.
Analysis
Preliminary analyses
Yin (2014: 135-136) identifies various strategies for analyzing case study evidence.
A helpful starting point is to “play” with your data. You are searching for patterns, insights, or concepts that seem promising. (Yin 2014: 135)
Citing Miles and Huberman (1994), Yin (2014) lists a few strategies at this playful stage:
- Juxtaposing data from different interviews
- Putting information into different arrays
- Making a matrix of categories and placing the evidence within them (see the sketch following this list)
- Tabulating the frequency of different events
- Putting information in chronological order or using some other temporal scheme
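
As a rough illustration of the matrix-of-categories, frequency-tabulation and chronological-ordering strategies, here is a minimal sketch using Python and pandas; the interview IDs, dates and category labels are invented placeholders, not data from this project.

```python
import pandas as pd

# Hypothetical coded interview excerpts; IDs, dates and categories are placeholders
excerpts = pd.DataFrame({
    "interview": ["P01", "P01", "P02", "P03"],
    "date": pd.to_datetime(["2023-02-01", "2023-02-01", "2023-03-15", "2023-04-02"]),
    "category": ["motivations", "infrastructure", "motivations", "governance"],
})

# Matrix of categories: which categories appear in which interviews, and how often
matrix = excerpts.pivot_table(index="interview", columns="category",
                              aggfunc="size", fill_value=0)

# Tabulating the frequency of different events (here, coded categories)
frequencies = excerpts["category"].value_counts()

# Putting information in chronological order
chronological = excerpts.sort_values("date")

print(matrix, frequencies, chronological, sep="\n\n")
```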
Yin (2014: 135) also emphasizes memo-writing as a core strategy at this stage, citing Corbin and Strauss (2014). These memos should include hints, clues and suggestions that simply put into writing any preliminary interpretation, essentially conceptualizing your data. He uses the specific example of shower thoughts.
Analytical strategies and techniques
Yin (2014: 136-142) then goes on to describe four general strategies:
- Relying on theoretical propositions
- Working your data from the “ground up”
- Developing a case description
- Examining plausible rival explanations
Yin (2014: 142-168) then goes on to describe five analytical techniques:3
3 I wonder: would Abbott (2004) call these heuristics?
- Pattern matching
- Explanation building
- Time-series analysis
- Logic models
- Cross-case synthesis
Ryan and Bernard (2000) describe various analysis techniques for analyzing textual elicitations in structured and codified ways.
Weitzman (2000) provides an overview of software and qualitative research, including a minihistory up to the year 2000 when the chapter was published.
Describing the first programs specifically designed for analysis of qualitative data, Weitzman (2000: 804) writes:
Early programs like QUALOG and the first versions of NUDIST reflected the state of computing at that time. Researchers typically accomplished the coding of texts (tagging chunks of texts with labels — codes — that indicate the conceptual categories the researcher wants to sort them into) by typing in line numbers and code names at a command prompt, and there was little or no facility for memoing or other annotation or markup of text.4 In comparison with marking up text with coloured pencils, this felt awkward to many researchers. And computer support for the analysis of video or audio data was at best a fantasy.
4 This caught my eye since it’s the same approach as that adopted by qc!
This history is followed by a sober account of what software can and cannot do in qualitative research, as well as an affirmation and dismissal of various hopes and fears. It is very reminiscent of Huggett (2018).
Statistical methods
- crosstab
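
A minimal sketch of a cross-tabulation using Python and pandas; the respondent attributes and values are hypothetical placeholders rather than findings from this study.

```python
import pandas as pd

# Hypothetical respondent attributes; roles and responses are placeholders
respondents = pd.DataFrame({
    "role": ["data manager", "epidemiologist", "epidemiologist", "data manager"],
    "shares_data": ["yes", "no", "yes", "yes"],
})

# Counts of respondents by role and data-sharing response
counts = pd.crosstab(respondents["role"], respondents["shares_data"])

# The same crosstab expressed as row proportions
proportions = pd.crosstab(respondents["role"], respondents["shares_data"],
                          normalize="index")

print(counts)
print(proportions)
```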
Writing
See Richardson (2000), who frames writing as a method of inquiry.