The process of analysis and interpretation of research data is dialectic and not linear.

AUTHOR’S NOTE: Parts of this chapter are adapted from the following: Hesse-Biber, S. N., & Leavy, P. (2004). Analysis, interpretation, and the writing of qualitative data. In S. N. Hesse-Biber & P. Leavy (Eds.), Approaches to qualitative research: A reader on theory and practice. New York: Oxford University Press.

Data analysis and interpretation are interrelated. You will find yourself analyzing and interpreting your data as your qualitative project proceeds. This process requires that you be open to new ideas in your data, revisiting and revising your analysis and interpretation as your study proceeds.

What Is Qualitative Data Analysis?

In the following passage, ethnographer Michael Agar (1980) distinguishes between analysis and interpretation:

In ethnography … you learn something (“collect some data”), then you try to make sense out of it (“analysis”), then you go back and see if the interpretation makes sense in light of new experience (“collect more data”), then you refine your interpretation (“more analysis”), and so on. The process is dialectic, not linear. (p. 9)

The process of turning your observations into what Harry Wolcott (1994) terms “intelligible accounts” (p. 1) calls forth reflection on how you might answer the following questions (Hesse-Biber & Leavy, 2004):

How do you know if you focused on the major themes contained in your interview material or ethnographic fieldwork account?
Do the categories of analysis you gathered make sense?
What type of analysis should you proceed with? Should you conduct a descriptive study or venture beyond descriptive findings with your own interpretation?
How much interpretation should you conduct? To what endpoint?

Steps in Analyzing and Interpreting Qualitative Data

Although I provide some steps to consider as you proceed with the analysis and interpretation of your qualitative data, you should not get the idea that qualitative analysis and interpretation proceed in a cookbook fashion. There is no one right way to go about this process. C. Wright Mills (1959) noted that qualitative analysis is, after all, “intellectual craftsmanship” (cited in Tesch, 1990, p. 96). As Renata Tesch (1990) notes, “Qualitative analysis can and should be done artfully, even ‘playfully,’ but it also requires a great amount of methodological knowledge and intellectual competence” (p. 97). Norman K. Denzin (2000) posits that there is an “art of interpretation”: “This may also be described as moving from the field to the text to the reader. The practice of this art allows the field-worker-as-bricoleur … to translate what has been learned into a body of textual work that communicates these understandings to the reader” (p. 313).

With these caveats in mind, I break down data analysis and interpretation as a series of steps, beginning with the collection of your qualitative data.

Step 1. Data Preparation

Think about what data you are going to analyze and interpret and whether these data are going to provide you with an understanding of your research question. If you are conducting interviews or focus groups, for example, you might want to make a transcript of your data. You will probably need to enter and store these data in a database of some type. You can print out copies of what you have entered in your database and carefully begin to read through and perhaps correct any data entry errors. The transcription process is not passive.
How you collect your data is crucial to analysis and interpretation. If you are conducting an interview or focus group, for example, several key issues arise in terms of how you will collect these data:

Will you videotape or audiotape your interview session or use some other recording device?
Will you transcribe the entire data session? Will you only summarize key passages or quotes? Will you select only those passages you perceive to address key research issues?
Will you transcribe all types of data you collect (e.g., all verbal data including laughter, pauses, and emotions such as sadness or anger, and nonverbal data such as hand gestures)?
Who will transcribe your data?
What transcription format will you use? How will you represent a participant’s voice, nonverbal information, and so on?

The answers to such questions may seem clear-cut—of course you will transcribe all your data. However, if you are a market researcher, you may want to sample only those passages from a given transcript that shed a positive light on the marketing of a given product.

How a researcher answers these questions is often dictated by his or her research question as well as the type of theoretical framework he or she holds regarding the interview as a process of meaning making. A positivist might dispense with some of these questions, opting to view the transcription process as a simple translation from oral to written language, something that can be done by almost anyone who can listen to the tape and has good typing skills. What the positivist transcribes is regarded as “the truth,” and each transcription is considered to contain a one-to-one correspondence between what is said orally and the printed word. Those with a more interpretative viewpoint might not view the transcription process as so transparent (see Mishler, 1991). In fact, they would stress the importance of the researcher’s point of view and the researcher’s influence on the transcription process itself. Those researchers with a more discourse analytic or linguistic theoretical framework will be especially aware of the lack of transparency in the translation process, noting the multiple levels of meaning within the transcription process that include such things as pauses, the way in which something is said, and the nonverbal cues used by a participant.

Feminist researchers such as Marjorie DeVault (2004) are particularly aware of the importance of listening to the data when transcribing interviews, especially interviews with those groups whose everyday lives are rendered “invisible” by the dominant society. DeVault notes the significance of listening to those moments in the interview where the interviewee is tentative or says “you know what I mean?” She suggests that these are the very moments where the researcher is able to unearth hidden meanings of interviewees whose lives and language are often overshadowed by the dominant discourse. She offers the following wisdom she has garnered in conducting interviews with women regarding the daily activities they perform in their homes, especially the work they perform in feeding their families:

The words available often do not fit; women learn to “translate” when they talk about their experiences. As they do so, parts of their lives “disappear” because they are not included in the language of the account. In order to “recover” these parts of women’s lives, researchers must develop methods for listening around and beyond words. (DeVault, 2004, pp. 233–234)
DeVault (2004) notes that tentative words like “you know?” might be discarded in transcribing one’s data but are in fact the very moments where “standard” vocabulary is inadequate and where a participant tries to speak from experience and finds language wanting.

Transcribing research data is interactive and engages the researcher in the process of deep listening, analysis, and interpretation. Transcription is not a passive act but instead provides the researcher with a valuable opportunity to actively engage with his or her research material from the beginning of data collection. You can, for example, begin to jot down a short memo about a given passage in the interview you found very insightful. You might want to begin the process of labeling certain passages you feel capture some important meanings in your data. The following are some tips you can think about as you begin transcribing your data.

Tips on Transcribing Your Data

The process of transcribing an interview may at first seem quite straightforward: you just listen to the participant’s words and translate them to the printed word. Yet in the very process of transcribing, your participant’s words are filtered through you. If you hire someone else to do your transcription, those individuals become the filters of meaning. The transcription process is a critical element in the meaning-making process of data collection and analysis. There are a number of transcription decisions that can serve to disrupt the flow of participants’ words and their intended meaning. Consider the following decisions that may alter the intended meaning of your participant’s words:

Did you transcribe every word of your interview, or did you pick and choose the sentences or points that you found most important?
Did you listen to the pauses in an interview and note them?
Did you listen for emotions or sounds that might be insightful when reading your transcript?
Did you listen from a place where you were not interrupted by your immediate environment?
Did you listen to the whole interview in parts with many breaks or straight through?

While you are listening, you might treat this listening as an opportunity to reflect on what is going on in the interview. What ideas emerge from this listening? Consider memoing along the way to capture ideas on the fly about this specific interview.

Step 2. Data Exploration

Creating Metaphors, Comparing and Contrasting, and Clustering Your Data. There are a number of simple yet creative ways you can begin to uncover meaning in your textual data. You can start by creating metaphors about what you think might be going on in your data. You might also compare and contrast two interviews and then start to cluster those interviews you feel are similar or different, asking yourself what things, ideas, or factors make for similarity or difference among those students you interviewed. One thing I found when doing this exercise is that there was a distinct set of differences between African American students who grew up in white communities and those who grew up in predominantly mixed-race or African American communities. One big difference was that those African American women growing up in white communities often tended to make friends with and hang out with their white friends only and found it difficult to be accepted by the African American women at their college, whose friendship groups consisted primarily of African American students.
Photo 11.2 The author came up with the metaphor of a bridge to describe a group of African American women students who saw their identity as one of a “bridge builder” between distinct racial or ethnic groups within their college.

The metaphor of a bridge helped me to understand these students’ identity role at a predominantly white college. They came to see a core characteristic of their identity as a bridge that connected distinct groups. Extending this metaphor allowed me to understand their disappointment with the lack of diversity policies at their university: the university had built no lasting ways for diverse students to connect with one another across their differences. One clue that you are on to a robust analytical metaphor is that it seems to draw many disparate narratives or parts of narratives together into some meaningful whole that serves to help you understand what is going on in your data (for more information about this study, see Hesse-Biber et al., 2010). You might then begin to more formally memo about what you found out using some of these less formal ways of getting at meaning, which can serve to jump-start your data analysis journey. Let’s turn to the writing up of memos about what is going on in your data.

Writing Memos. Data exploration and memo writing work hand in hand. In the exploration phase you read your textual and/or visual or audio data and think about them. Memos are the beginnings of analysis and interpretation of what it is you found through a given analytical procedure such as grounded theory analysis. In the process of thinking about your data, you begin to mark up your text by highlighting what you feel is important. You might write down these ideas in the form of a memo. I want to emphasize the importance of description during this phase. You might begin by summarizing what data you have collected thus far. Write down (memo) any ideas that come to you as you are reading your notes or interviews. What things fit together? What is problematic? You might think about using some visual aids—like diagrams—to help you think about ideas. What are the most telling quotes in your data? Researchers who want to get a closer picture of their data in order to build theory and to potentially draw out some findings engage in all of these “first run through the data” techniques.

The following is a first run through the data of a study on African American women who attend predominantly white schools. Note how the memo describes the early background of the interviewee and some information about her family. The memo is linked to the text of the interview by noting the line numbers on which this information was gathered from the interview. Note that this memo goes beyond description and begins to move into analysis. The analytical moments of this memo are about putting information together that relates to the larger goals of the study, which deal with issues of racial identity and body image and what impact, if any, attending a predominantly white school has on African American women’s sense of their racial identity and feelings about their body image. You will notice that the memo links issues of racial pride, self-esteem, and self-confidence as important factors that appear to “protect” women of color from white Western norms of beauty, which have been shown in the research literature to be an important factor in the development of body image issues among Caucasian women.
Although the linking of the factors in this memo is still tentative, we can see the beginnings of an important set of linkages that the researcher needs to explore in more detail in other interview material.

Memo: Initial Impression of Your Interview

This participant was born and raised in Alabama by her mother, along with her brother. Her father and mother separated when she was two. She grew up in a house with her mother’s parents (her grandparents), her aunt and uncle, and her brother and her mother living in the basement. [lines 55–60]

She had no interest in going to a school that had fraternities or sororities because she does not like how cliquey they get, how much they control your life, and how shallow and materialistic they are. She is not into cliques and being elitist about your group, and she is not from money and not materialistic, so that was a no. Part of the appeal of her current college is that it has no fraternities or sororities.

This participant comes from a very close-knit family, who all live close together, or did in Alabama before she and her brother and mother moved to Michigan so that her mother could go back to school at Michigan State. But they used to spend every Sunday together: the traditional Southern picnic, she called it. [lines 69–80]

When her mother went back to school at Michigan State, she had to take on a more parental role when she was only seven and eight. She cooked, cleaned, and watched her little brother. [lines 129–131] This could explain her confidence, good work ethic, maturity, responsibility, et cetera. Her father has been in jail. [lines 155–160]

This participant really respects her mother for going back to school to get her bachelor’s degree in order to make a better life for herself and her children, so that her children could have more opportunities than she had. [lines 167–170] I feel that having such a strong, positive, optimistic female role model helped mold this participant’s attitudes toward life.

This participant and her mother have both been diagnosed with clinical depression. Her mother was on disability for it, and the participant is on medication for depression. [lines 196–201] This participant feels the need to persevere, despite all the hardships in her life, like her depression for example. She says that she draws a lot of inspiration to “never give up” from her family: both from seeing that they never give up and from feeling that if she does give up, she will be letting her family down, which she cannot do after all they have done for her to have a better life. “Everything I do is for my family.” [lines 265–280]

This participant identifies herself as black American and refuses to call herself African American because she feels that this just simplifies what she is. She says she is part African, plus other ethnic backgrounds as well. [lines 283–297] This participant had a great sense of racial identity and self-worth and a great amount of self-confidence. I feel these three things are the reason why she never got drawn into the idea that she had to “be whiter” or into issues such as the obsession with body image that white women appear to face more often in her predominantly Caucasian college. Her identity, as she notes, is “black American.”

Step 3. Specification and Reduction of Data

Detailed memo writing provides a way to reflect on one’s data and to verbalize how categories are connected in the overall process, serving as an analytic bridge between theory and data collection.
We used memo writing, as in the previous example, both to analyze our current interviews and to point to specific areas or topics on which we needed to collect more information, thus also playing a role in driving our future research questions. As salient topics emerged from the memos, these ideas were explored through subsequent interviews in the study.

Coding Your Data. After gaining more familiarity with your data by simply reading it over several times and perhaps writing up a brief memo, as in the previous example, that contains your impressions about the participant and any ideas about what you think is going on in the interview, you might begin to code your interview data. The coding process can start as soon as you begin to collect some data; don’t wait for all your data to be collected. A little bit of data collection can reveal some important patterns, as we shall see in the excerpt of an interview with a black adolescent. Data collection and data analysis are iterative processes—the two work interactively.

Let’s look at the following excerpt from a transcript of an African American student. In this excerpt she relates her experiences growing up in an affluent white community as a young girl, mostly hanging out with her white friends. As she transitioned to college, she relates how difficult it was for her to make friends with a group of African American female students she met through one of her classes.

Transcript Excerpt: Coding Data

From first grade to I think eighth grade, there were maybe four other black people besides myself … so I played soccer and basketball with a bunch of kids I’ve grown up with and they just happened to be white. I just never had an issue with my race growing up in my community. I was always just like, “Whatever, I’m just a kid.” My race had never really been a problem for me. When I came to college and tried to hang out with students from the black community I felt like there was no one in the black community that was really like me. They didn’t like the same music, didn’t speak the way I did, and didn’t come from the same background that I’d come from. I felt like I had to act like a stereotypical black girl or person. For the most part those black friends I wanted to hang out with thought I was acting “too white.” One girl said to me, “You are the whitest black girl I’ve ever known.” And I think when she had said that, that made me stop and think, I was like, “What do you mean?” I was kind of upset with her for saying that, I was like, “But I’m me.”

What Is Coding?

In its most basic form, coding is assigning meaning to a chunk of text. This chunk can be a word, several words, or full paragraphs. Coding involves both analysis and interpretation. Codes can take on many different forms, such as the following (a small sketch of how such coded segments might be stored appears after this list).

Descriptive codes. This is where you assign a label or “tag” to a participant’s words. This form of coding serves as a way to organize your data, for example, by topic.

Categorical codes. This is where you begin to group descriptive codes into a more general category of meaning that goes beyond being a descriptor.

Analytical codes. This is where you begin to capture a broader range of meaning beyond describing your participant’s specific activities or the range of specific events they relate to you. Instead, you begin to see the specific actions and feelings that collectively reveal what it is like for them to negotiate their identity as a whole.
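Because later sections of this chapter discuss software that codes and retrieves text, it may help to see the mechanics in miniature. The following is a minimal sketch, in Python, of one way coded segments could be represented and retrieved. The Segment class, the retrieve function, and the stored structure are illustrative assumptions, not features of any published coding package; the codes themselves echo the transcript excerpt above.

```python
# A minimal, hypothetical sketch of coded interview segments.
# Names and structure are illustrative only.

from dataclasses import dataclass, field


@dataclass
class Segment:
    """A chunk of transcript text plus the codes assigned to it."""
    text: str
    codes: list = field(default_factory=list)  # a segment may carry several codes


segments = [
    Segment("I played soccer and basketball with a bunch of kids I've grown "
            "up with and they just happened to be white.",
            codes=["played soccer", "played basketball"]),            # descriptive
    Segment("I was always just like, 'Whatever, I'm just a kid.'",
            codes=["just a kid", "race not an issue"]),                # descriptive + categorical
    Segment("You are the whitest black girl I've ever known.",
            codes=["too white", "acting black"]),                      # in vivo + analytical
]


def retrieve(code: str) -> list:
    """Gather every segment tagged with a given code."""
    return [s.text for s in segments if code in s.codes]


print(retrieve("too white"))
```

Note that nothing in the sketch decides what a code means; as the chapter stresses, the analytical work of interpreting a code such as “too white” remains with the researcher.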
Being “too white” is an analytical code that captures a process no single descriptive label can convey. You as the researcher put together the descriptive codes and categories you originally coded into a general process of identity. Through these codes, the participant relates meaning to you that goes beyond description and categorical thinking. This type of coding takes time to capture the meaning and essence of this process. All the coding you do is important and leads to the formation of analytical ideas about your data. Even if you did circle “too white” in the first round of reading this excerpt, you still have to capture just what this analytical category means and how it plays out in your participant’s life.

There are several “ways into” coding your data. It’s important to know that there is no one “right” coding method. However, there are a few simple ways to code your data that also will give you a window into the layers of meaning contained in textual data (for a more detailed explanation of coding, see Saldana, 2013). The following are a few simple coding techniques you might try applying to your own data.

First, begin by reviewing your research purpose and problem. Second, continue by familiarizing yourself with your data. It’s important to read over the data you want to code several times. Third, as you are reading over your data, think about what specific words or strings of text you are drawn to. What pops out at you as you are reading? As you begin to read, try circling words or phrases you think are important in helping you to understand your data.

Let’s apply this simple coding approach to the excerpt you just read. Here’s what I did to begin to extract meaning from this data. I began by reading over the excerpt several times. As I did so, I began to code the data by circling words and/or phrases in the text and applying a label (code) to words or phrases that I felt captured some crucial point or related some important meaning my participant was conveying. You will notice that the “codes” I circled are sometimes my participant’s exact words, such as “too white.” This is an in vivo code, an important marker of meaning to pay attention to while you are reading over your interview. Your participant’s words sometimes provide you with a term or phrase that is an analytical window into how she perceives what is happening to her. That, in turn, provides you with insight into how she is negotiating, in this particular instance, her black identity at a predominantly white college.

Here is the initial set of codes I came up with when I started to read this excerpt through a process of thinking about the data. I placed my codes into several types.

Coding Interview Excerpt

Descriptive Codes
“played soccer”
“played basketball”
“race”
“just a kid”

Categorical Codes
Problems hanging out with black students
Being myself
Race not an issue
Race as an issue

Analytical Codes
Acting black
“too white”

A Grounded Theory Approach to Coding

The process of coding just described is loosely modeled after a “grounded theory” approach to the analysis of qualitative data. Grounded theory is a form of analysis developed initially by Glaser and Strauss (1967). This analysis perspective starts from an engagement with the data and ends with a theory that is generated from, or grounded in, the data. Kathy Charmaz’s (2004) work with grounded theory provides us with one important strategy for extracting meaning from qualitative data.
Charmaz refines the ideas of grounded theory into a concise set of step-by-step analysis instructions. She takes the reader through the process of collecting data, analyzing, and writing memos. These components of the analysis work iteratively: as one collects the data, one is analyzing the data. One begins the process, says Charmaz, by doing “open coding.” This consists of literally reading line by line, carefully coding each line, sentence, and paragraph. Charmaz (2004) suggests asking the following questions during this process to assist with coding:

What is going on?
What are people doing?
What is the person saying?
What do these actions and statements take for granted?
How do structure and context serve to support, maintain, impede, or change these actions and statements? (p. 507)

Coding is a central part of a grounded theory approach and involves extracting meaning from nonnumerical data such as text and multimedia content. If we were to describe how the coding process is actually done with text materials such as interviews, it would sound something like this: coding usually consists of identifying meaningful chunks or segments in your textual data (in this case, your interview) and giving each of these a label (code). Coding is the analysis strategy many qualitative researchers employ in order to help them locate key themes, patterns, ideas, and concepts that may exist within their data.

Example of the Grounded Theory Coding Process

Let’s return to our guiding example of body image. I collected interviews and participant observations for my research project that focused on how black American teens view their body image (Hesse-Biber, Howling, Leavy, & Lovejoy, 2004). I did not have any specific hypothesis to test out on the data; instead, I was interested in discovering the following: How do black female American teens view their body image? I spent many hours observing and interviewing black American teenagers at a variety of local community centers in an inner city in the Northeast. I obtained hours of interviews with and observations of black American teens and recorded field note observations of the goings-on at each of the community centers for several years.

Data collection and data analysis should proceed together—as soon as you begin to gather the first bit of data from the field, it is important to begin to make sense of it. In conducting such a study, you might begin the process of analysis by reading over and becoming familiar with the data collected after each visit to the community center. As you read these data you might be interested in marking up or highlighting anything you think is relevant to your understanding of how black American women perceive their identity and body image. You might then apply a name or code to each of these segments, such as “positive body image.” Some segments of text may contain more than one code. Your coding procedure is open ended and holistic. Your goal is to gain insight and understanding. You do not have a predefined set of coding categories. Your analysis procedure is primarily inductive and requires an immersion of yourself in the text until themes, concepts, or dimensions of concepts arise from the data. You would especially look for the common ways or patterns of behavior whereby individuals come to terms with their body image and identity. This process is both analysis (discerning what the data say) and interpretation (deciding what you think the data mean).
This is a disciplined process, and you are constantly interrogating (testing) your interpretations against the data you’re collecting in an ongoing, iterative manner.

Photo 11.3 The marking up of the text is used to locate those segments that you believe are important.

How Do You Code Data?

Grounded theory uses a “line by line” coding technique whereby you develop categories by coding as you are reading (see the excerpt that follows). As you can see from this example, some codes are literal codes—these words appear within the text itself and are usually descriptive. Others in the code list are more interpretative (e.g., “internal self-assessment”). These codes are not tied as tightly to the text itself but begin to rely on the researcher’s insights for drawing out interpretation. This type of coding relies on more focused coding. A focused coding procedure allows for the building and clarifying of concepts: a researcher examines all the data in a category, compares each piece of data with every other piece, and finally builds a clear working definition of each concept, which is then named. This name becomes the code (Charmaz, 1983). Focused coding also requires that a researcher develop a set of analytical categories that move him or her toward a broader interpretation of what is going on in the data, rather than just labeling data in a topical fashion. Modifying code categories becomes important in order to develop more abstract code categories from which one can generate theoretical constructs.

So, for instance, in this example, we identify the category “internal self-assessment,” but we can also see some additional codes that might help us clarify the meaning of this concept from the participant’s perspective. As your coding progresses, you will have an opportunity to expand on the varied ways in which participants talk about internal self-assessment as a process. To get from the more literal to the conceptual level of analysis, you might mark up what you see as the different and similar ways the participant talks about the idea of internal self-assessment. You might begin to memo about this idea (see the memo on internal self-assessment that follows). As more and more interviews are analyzed and you continue to memo about what is going on in your data, you may come up with several analytical dimensions or subcodes to the concept of internal self-assessment (such as the subcode “ignores external”).
Initial Codes From an Excerpt of an Interview With an African American Teenager (see Hesse-Biber et al., 2004)

Excerpt → Initial Code

“I don’t think that the ideal woman has to look like anything personally.” → Ideal woman
“I think the ideal woman has personality and character, it’s how you act.” → Importance of personality
“My looks don’t bother me, it’s just my personality.” → Physical appearance is secondary
“My personality. I wanna have a good personality” → Importance of personality
“and have people like me, if they don’t like me for my personality, or just because of my looks, then they must be missing out on something.” → Importance of personality; Missing out on noticing personality
“Um, when you have it [self-esteem] so much that you don’t care what people think about you.” → Self-esteem; Don’t care what others say
“I mean, I flaunt my self-esteem, not like ‘Oh yeah, dahdadada,’” → Flaunting myself
“I just sit up real straight and that shows self-esteem right there.” → Sits straight
“I’m a woman, I’ll wear stuff to school that’s like … wacked.” → Wears what she wants
“I have earrings that are about this big, and that shows my self-esteem,” → Wears big earrings
“I don’t care what you say about them….” → Doesn’t care what others say
“Oh well, that’s what I think, I don’t care, I don’t fit in anywhere anyway,” → Internal self-assessment: own person
“I’m my own self so why can’t I act like that,” → Internal self-assessment
“why can’t I dress like that?” → Wears what she wants

Going From Initial Codes to More Focused Codes

Initial Code (Literal Code) → Analytical (More Focused) Code

don’t care what others say → internal self-assessment: ignores external
flaunting self → supercharged identity: belief in abilities
sitting straight → supercharged identity: being proud
wears what she wants → internal self-assessment: ignores external
wears big earrings → internal self-assessment: ignores external

How Can Writing a Memo Assist With Coding Data?

By writing memos one can raise a code to the level of a category and/or analytical code. The idea of a grounded theory approach is to read carefully through the data and to uncover the major categories and analytical concepts and, ultimately, the properties of these categories and concepts and their interrelationships. Memo writing is an integral part of the grounded theory process and assists the researcher in elaborating on his or her ideas regarding data and code categories. Ideally, memo writing takes place at all points within the analysis process. Reading through and sorting memos can also aid the researcher in integrating his or her ideas and may even serve to bring up new ideas and relationships within the data.

The grounded theory approach represents only one of many analysis strategies, such as doing a content analysis of your data as described in Chapter 9. If you are interested in teasing out the storylines in your data, you might conduct a narrative analysis, and so on. There is no right or wrong way to synthesize data, and often the researcher jumps back and forth between collection, analysis, and writing. I have suggested some specific analysis strategies to accompany each research method presented in the book. However, a grounded theory approach is a widely used analytical technique that spans several research method approaches, from the analysis of interviews to ethnographic field observations.
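The move from initial to focused codes shown in the table above can also be pictured as a simple recoding step. The sketch below, in Python, assumes a hypothetical dictionary-based mapping; only the code pairings themselves come from the table, and the function name is illustrative.

```python
# A sketch of recoding initial (literal) codes into more focused
# (analytical) codes. The mapping reproduces the table above; the
# surrounding code is illustrative only.

initial_to_focused = {
    "don't care what others say": "internal self-assessment: ignores external",
    "flaunting self":             "supercharged identity: belief in abilities",
    "sitting straight":           "supercharged identity: being proud",
    "wears what she wants":       "internal self-assessment: ignores external",
    "wears big earrings":         "internal self-assessment: ignores external",
}


def focus(initial_codes: list) -> list:
    """Recode a list of initial codes, keeping any code not yet mapped."""
    return [initial_to_focused.get(code, code) for code in initial_codes]


print(focus(["flaunting self", "wears big earrings", "self-esteem"]))
# ['supercharged identity: belief in abilities',
#  'internal self-assessment: ignores external',
#  'self-esteem']
```

Codes left unmapped (here, “self-esteem”) simply pass through, mirroring how a researcher’s focused coding scheme evolves gradually rather than all at once.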
By memoing on the code “internal self-assessment” (see the Behind the Scenes box that follows), the researcher is encouraged to theorize about the meaning of this analytical concept and the ways in which it may be related to other factors. In fact, internal self-assessment was found to be related to the code categories “cultural pressures to be thin” and “racism.” In analyzing my data (see Hesse-Biber et al., 2004), I found that black American girls often protect themselves from the cultural pressures of white Western norms of beauty by adopting a stance of internal self-assessment. The process of internal self-assessment was found to be an early coping strategy young children learn within their communities to deal with racial discrimination from the wider society (Hesse-Biber et al., 2004). Let’s go behind the scenes and look at a memo that was written for this project. The qualitative coding process consists of cycles of coding and memoing, as we can observe in Figure 11.1.

Figure 11.1 Coding and Memoing: A Dynamic Process

Behind the Scenes: Memo on Internal Self-Assessment

An interesting and significant pattern of responses emerged in the interviews that we captured with the code “internal self-assessment.” This code category describes an orientation in which the self assesses itself according to a set of internal standards rather than by the (external) judgments of others. Typically this type of response emerged in relation to questions about whether the participant was worried about her weight or appearance or felt pressured to look or act a certain way by peers or the media. Participants answered this kind of question with an assertion that they didn’t care about what others thought about them (ignores external), or that they were only concerned about how good they feel about themselves (listens to internal), or some combination of the two (ignores external/listens internal)—for instance: “I don’t care what others say, as long as I look good to myself, it doesn’t matter what people say.”

Nineteen participants made statements that could be characterized as demonstrating the orientation of internal self-assessment. Often these statements included the assertion that the participants loved or felt good about themselves the way they were and that they were not willing to change in order to please others. Some participants said they learned this attitude from their mother or father. Significantly, this strategy or attitude protects these girls from the judgments of others and may make them less susceptible to white Western norms of beauty and the propensity to lose themselves in their efforts to please and attract men. In fact, several participants said that they did not feel pressured to please men (in terms of skin color, body size, and other aspects of appearance) because it is more important in their view to feel good about themselves. It is unclear from the interviews to what extent this strategy is based on a kind of defensive denial or on genuine self-acceptance and maturity. This attitude may be a coping strategy developed in the black community in response to racism and societal devaluation. For instance, when asked what it means for her to be a black female, one girl said that it meant “to be strong with what I’m doing and you know I can’t really worry about what other people think.” This strategy may also develop in response to the often fierce teasing by peers that many of these girls also describe in their interviews.
Subcodes for Internal Self-Assessment

Several subcodes were arrived at for further reflection on the overall meaning of the concept of internal self-assessment:

Ignores external—Participant indicates that she doesn’t care or is not worried about others’ judgments about her. Not willing to change in order to please others.
Listens to internal—Participant indicates that what matters to her is how she feels about herself or what’s on the inside. Often the participant asserts that the important thing is that she likes, loves, or feels good about herself.
Ignores external/listens internal—Participant indicates that she doesn’t care what others think of her because she feels good about herself, or that it only matters what she thinks about herself.

Source: Written by Meg Lovejoy, from Hesse-Biber, S. N., Howling, S. A., Leavy, P., & Lovejoy, M. (2004). Racial identity and the development of body image issues among African American adolescent girls. The Qualitative Report, 9(1), 49–79.

Step 4. Interpretation

It is important to note that analysis and interpretation are not necessarily two distinct phases in the qualitative research process, as we have seen in the case of grounded theory analysis. The process is much more fluid, as the researcher often engages simultaneously in the processes of data collection, data analysis, and interpretation of research findings. With early observations in the field or with the first interviews conducted, early memo writing will allow the researcher to see which ideas seem plausible and which ought to be revised. See David Karp’s notes concerning memo writing in the box that follows. Whether data are collected from fieldwork observations or intensive interviewing, the researcher is involved with qualitative data at an intimate level.

As we transition from problems with data collection and coding to issues of writing up research results, other questions begin to emerge concerning the interpretation of qualitative data. At the heart of this questioning are issues of power and control over the interpretation process. We now turn to another important way in which the researcher’s social attributes can impact the research by looking at issues of interpretation. One of the central issues to examine in this discussion of the interpretation of findings is the extent to which power differences between the researcher and the researched impact the research findings and the researcher’s assessment of what they mean (the interpretation process). What power does the researcher have in determining whose voice will be heard in the interpretation of research findings?

This question is of central importance in the work of Katherine Borland (1991). She explores the range of interpretive conflicts in the oral narrative she collects from her grandmother, Beatrice Hanson. She asks her grandmother to relay the story of a trip to the Bangor, Maine, fairgrounds on which she accompanied her father to the racetrack, an event that happened over 42 years earlier. Borland is interested in understanding the different levels of meaning making that take place in the telling and interpretation of oral narratives. She recognizes that there are multiple levels of interpreting narratives. A first-level narrative story—that is, the story her grandmother tells her—conveys the particular way her grandmother constitutes the meaning of the event. There is, however, a second level of meaning to the narrative.
This is the meaning the researcher constructs, filtered through his or her own personal experience and expertise. Borland listens to her grandmother’s story and reshapes it by filtering the story through her own personal life experiences and scholarly expertise—keeping in mind the expectations of her scholarly peers, to whom, she notes, “we must display a degree of scholarly competence” (Borland, 1991, p. 73). Borland uses a gender-specific theoretical lens to interpret her grandmother’s story as a feminist account. However, her grandmother does not agree with her interpretation. In dealing with these issues of authority or ownership of the narrative, Borland raises issues about who has the authority to interpret narrative accounts. For Borland, the answer lies in a type of delicate balancing act. Borland shows her interpretation to her grandmother, and the process of exchanging ideas and interpretations begins. It is clear that no story should remain unmediated; the storyteller’s viewpoint ought to be present within the interpretation. Although not all conflicts can be resolved, it is important that the researcher be challenged by the narrator’s point of view. The exchange of points of view might provide new ways of understanding the data.

David Karp on Memo Writing

Especially at the beginning you will hear people say things that you just hadn’t thought about. Look carefully for major directions that just had not occurred to you to take. The pace of short memo writing ought to be especially intense toward the beginning of your work. I would advocate the “idea” or “concept” memos that introduce an emerging idea. Such memos typically run two to three pages. After pondering the ideas in the memos and coding the interviews—when you think you have been able to grab on to a theme—it is time to begin a data memo. By this I mean a memo that integrates the theme with data and any available literature that fits. A data memo begins to look like a paper. In a data memo always array more data on a point than you would actually use in a research paper. If you make a broad point and feel that you have 10 good pieces of data that fit that point, lay them all out for inspection and later use. Also, make sure to lay out the words of people who do not fit the pattern.

How Do You Establish Validity and Reliability of Interpretation?

Now that you have interpreted your qualitative data, how do you know your interpretation is valid and reliable? When thinking about the validity of the research findings, you can test your interpretations against competing knowledge claims and see how your findings stand up. You should also provide strong arguments for any knowledge claims you draw from your data. Ask yourself the following: What factors make the research findings resonate for you? Beyond this, we suggest following Kvale’s (1996) three-part model for judging the validity of qualitative data: validity as craftsmanship, communicative validity, and pragmatic validity. These dimensions of validity were discussed in detail in Chapter 2, but at this point in the research we suggest addressing the following points (derived from Kvale, 1996):

Are you telling a convincing story? Try theorizing from your data interpretations.
Have you reached your findings with integrity? Have you checked your procedures? Look for and address negative cases.
Make your interpretations available for discussion (agreement and debate) among “legitimate knowers” (others in the social scientific community).
How do your findings impact those who participated in the research, and how do your findings impact the wider social context in which the research occurred?

Once you have gone through this list of checks, and the research findings resonate with you, validity has been appropriately considered. Reliability with regard to qualitative data means there is internal consistency to the data you collected. A good way to think of reliability in a qualitative approach is to reflect on the extent to which the data you collected make sense overall.

Reliability and Validity Checks

Internal reliability in qualitative approaches to research demands a high degree of agreement between the codes and what participants are saying. How can you ensure reliability in analyzing your data? One thing you might do is have two researchers from your project (who were carefully trained in the same coding procedures) use your code categories to code the same interview and then compare the extent to which their coding of the interview agrees with your coding of the data. Where there is disagreement you might bring in a third coder, who would then code the interview independently and offer a third opinion on those aspects of the interview where there was coding disagreement. A simple sketch of this kind of agreement check appears below. In addition, it’s important to practice reflexivity by memoing on your core categories as new data are collected. This will provide you with an internal dialogue if there is not an opportunity to have a second coder address issues of reliability and validity. An important validity strategy is to do “member checking.” This occurs when you ask your participant to weigh in on your thoughts concerning an analytical idea (code) you feel captures a central meaning in the interviews you have been conducting thus far.

Photo 11.4 Through member checking and creating an environment of trust, your participants will tell you what they think and where they might tweak your interpretation.

Another way to establish the validity of your study and to gain important analytical insights into your data, beyond memoing and coding your data individually, is to engage in a dialogue with others who are also analyzing the same data you are working on. What I found extremely helpful was to form a series of dialogue sessions with some of the members of my research team who were also analyzing interview data for our project on the lived experiences of African American college students who attend predominantly white colleges. As the senior researcher and author, I wanted to go over some of the major thematic categories that appeared to be emerging from the intensive interview data I had gathered and shared with my research team. I wanted to discuss and reflect more on the range of different racial identity groups I felt were emerging from the interview data I had already collected. It appeared to me that there were very different groups of women in my study who had very different types of experiences. Doing this type of validity and reliability checking serves to head off potential interpretative issues with your data along the way and ensures you are deeply listening to your participants’ standpoint on the specific research questions you seek to answer.
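To make the two-coder agreement check described above concrete, here is a minimal sketch of a percent-agreement calculation in Python. The coder data are invented for illustration, and chance-corrected measures such as Cohen’s kappa are a common refinement that this simple sketch omits.

```python
# A minimal sketch of one intercoder reliability check: percent agreement
# between two coders who independently coded the same segments.
# The data below are invented for illustration.

coder_a = ["ignores external", "listens to internal", "ignores external", "self-esteem"]
coder_b = ["ignores external", "listens to internal", "self-esteem",      "self-esteem"]

# Count segments on which the two coders assigned the same code.
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
percent_agreement = agreements / len(coder_a)
print(f"Agreement: {percent_agreement:.0%}")  # 75%

# Segments where the coders disagree are the ones to send to a third coder.
disputed = [i for i, (a, b) in enumerate(zip(coder_a, coder_b)) if a != b]
print("Disputed segment indices:", disputed)  # [2]
```

In practice the disputed segments, not the agreement score alone, are the analytically interesting output: they point to codes whose working definitions still need clarification.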
Figure 11.2 sums up the four steps of data analysis and interpretation. As we move from one step to another, we begin to reduce and collapse our data. Coding helps to reduce our data, and memoing assists with thinking about how to organize our data into meaningful categories and patterns.

Figure 11.2 Steps in Data Analysis and Interpretation: A Visual Model

Software for Qualitative Data Analysis

As researchers collect many pages of text, they may want to use a software program to analyze their data. However, important analysis and interpretation issues may arise when using such an analysis tool (Hesse-Biber & Leavy, 2004): Should a researcher employ a computer software program at all? After all, isn’t analysis more of an art form? Will the software program interfere with the creative process of analysis? Will using a software program make the researcher more distant from the data?

As researchers begin the process of turning their research data into a finished product, they may find that their analysis is highly complex. They can be overwhelmed by the mounds of research data consisting of unanalyzed text that may reach thousands of pages. Miles and Huberman (1984) note:

A chronic problem of qualitative research is that it is done chiefly with words, not with numbers. Words are fatter than numbers and usually have multiple meanings. This makes them harder to move around and work with. Worse still, most words are meaningless unless you look backward or forward to other words…. Numbers, by contrast, are usually less ambiguous and may be processed with more economy…. Small wonder, then, that most researchers prefer working with numbers alone, or getting the words they collect translated into numbers as quickly as possible…. [However] converting words into numbers, then tossing away the words gets a researcher into all kinds of mischief…. Focusing solely on numbers shifts our attention from substance to arithmetic, and thereby throws out the whole notion of qualitativeness; one would have done better to have started with numbers in the first place. (p. 546)

The use of computer software packages can enhance a researcher’s analysis. As Fielding and Lee (1998) note, the work of researchers over the past two decades has been transformed by software programs. Such programs can be categorized into two main types. The first consists of generic software not specifically designed for qualitative research. The second type of software is specifically designed for qualitative data analysis. These packages fall into four types: code and retrieve programs, code-based theory-building programs, conceptual network-building programs, and textual mapping software.

Code and retrieve programs allow codes to be assigned to particular segments of text and make for easy retrieval of code categories using sophisticated Boolean search functions (e.g., using and, or, and not to filter your data; a small sketch of this kind of retrieval follows). Code-based theory-building programs allow the researcher to analyze the systematic relationships among the data, codes, and code categories. Some programs provide a rule-based systems approach that allows for the testing of hypotheses in the data, whereas others allow for a visual representation of the data. Conceptual network-building and textual mapping software programs allow researchers to draw links between code categories in their data. Researchers see these last two as add-on features to their code-based theory-building programs.
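To illustrate the kind of Boolean code retrieval such programs perform, here is a small hypothetical sketch in Python. The segment structure, the codes, and the search function are illustrative assumptions rather than the interface of any actual package.

```python
# A sketch of Boolean (and/or/not) retrieval over coded segments,
# of the kind code-and-retrieve programs provide. Data are invented.

segments = [
    {"text": "I don't care what people say about my earrings.",
     "codes": {"wears big earrings", "ignores external"}},
    {"text": "I just want to feel good about myself.",
     "codes": {"listens to internal"}},
    {"text": "My friends tease me but I sit up straight anyway.",
     "codes": {"teasing", "sitting straight", "ignores external"}},
]


def search(segments, all_of=(), any_of=(), none_of=()):
    """AND / OR / NOT filtering over the codes attached to each segment."""
    hits = []
    for seg in segments:
        codes = seg["codes"]
        if (all(c in codes for c in all_of)
                and (not any_of or any(c in codes for c in any_of))
                and not any(c in codes for c in none_of)):
            hits.append(seg["text"])
    return hits


# e.g., retrieve "ignores external" AND NOT "teasing"
print(search(segments, all_of=["ignores external"], none_of=["teasing"]))
```

The filters simply mechanize retrieval; deciding which Boolean combinations are analytically meaningful remains the researcher’s interpretive task.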
Fielding and Lee (1998) note that the field of qualitative software development has grown over time, and there is now an extensive international community of software users. The growing usage of software programs as tools in qualitative analysis raises a number of methodological and theoretical concerns regarding the analysis and interpretation of qualitative data. I discuss (Hesse-Biber, 1995) five fears that critics frequently express about the use of software.

The first of these fears is that computer programs will separate the qualitative researcher from the creative process. Some analysts liken the experience of doing qualitative work to artistic work, and the use of computer technology is often seen as incompatible with art. There is a strong fear that the use of computer programs will turn the researcher into an unthinking and unfeeling human being.

A second fear is that the line between quantitative and qualitative analysis will be blurred by imposing the logic of survey research onto qualitative research and by sacrificing in-depth analysis for a larger sample. These concerns stem from the fact that software programs now permit the easy coding and retrieval of large numbers of documents. The volume of data now collected for some qualitative studies is comparable to that of quantitative research, and there is the fear that qualitative research will be reduced to quantitative research.

A third fear is that computer usage may dictate the definition of a particular field of study. Software program structures often set requirements for how a research project should proceed. This raises concerns among some critics that software programs will determine the types of questions asked and specific data analysis plans.

A fourth concern is that researchers will now have to be more accountable for their analysis. Computer programs for analyzing qualitative data require the researcher to be more explicit about the procedures and analytical processes used to produce his or her data and interpretations. Asking qualitative researchers to be more explicit about their methods and holding their interpretations accountable to tests of validity and reliability will raise some controversies. Should there be strict tests of validity and reliability for qualitative data?

Finally, there is the fear of lost confidentiality through the use of multimedia data.
Miles, Huberman, and Saldana (2014) note the following uses of software in analyzing qualitative data.

Uses of Computer Software in Qualitative Studies

Making notes in the field
Writing up or transcribing field notes
Editing: correcting, extending, or revising field notes
Coding: attaching keywords or tags to segments of text to permit later retrieval
Storage: keeping text in an organized database
Search and retrieval: locating relevant segments of text and making them available for inspection
Data “linking”: connecting relevant data segments to each other; forming categories, clusters, or networks of information
Memoing: writing reflective commentaries on some aspect of the data as a basis for deeper analysis
Content analysis: counting frequencies, sequences, or locations of words and phrases
Data display: placing selected or reduced data in a condensed, organized format, such as a matrix or network, for inspection
Conclusion-drawing and verification: aiding the analyst to interpret displayed data and to test or confirm findings
Theory building: developing systematic, conceptually coherent explanations of findings; testing hypotheses
Graphic mapping: creating diagrams that depict findings or theories
Preparing interim and final reports

Source: Miles, Huberman, & Saldana, 2014, p. 46.

Which Software Program Should I Choose?

There is a range of qualitative data analysis software tools available, and the best way to choose which type of program will work for you is to peruse the CAQDAS (Computer Assisted Qualitative Data Analysis) website (http://caqdas.soc.surrey.ac.uk), which lists the available software programs with information on how to download a demo version of each to try out. This website also contains workshop information on software demonstrations and a variety of resources for learning more about using qualitative software.

Hesse-Biber and Crofts (2008) suggest the following set of reflective questions to consider when choosing a qualitative software program. This checklist is partly derived from the wisdom of Renata Tesch (1990) and Eben Weitzman and Matthew Miles (1995). The perspective taken by Hesse-Biber and Crofts is grounded in a user’s perspective. The user should prioritize which questions are most relevant for his or her research agenda.

What type of computer system do you prefer to work on or feel most comfortable working on? Does the program support your operating system? Do you need to upgrade your system or perhaps purchase a new computer to meet the requirements of a specific program?
Do you like the look and feel of a program’s interface? What excites you about this program at a visceral level? Does the look and feel of the program resonate with your own research style?
What is your analysis style? How do you plan to conduct your analysis, and how might computers fit into that style? How might each program enhance (or detract from) your analysis? In what sense? For example, do you plan on coding most of your data? What type of coding do you want to do? How do you prefer your data be retrieved, and how important is it to you to be able to see the full context from which the data were taken? Are you a visual person? Do you like to see relationships and concepts displayed in some type of diagram or network? Do you anticipate quantifying any of your data?
For which research project or set of projects do you anticipate using a software program? For example, what type of data does your project consist of—textual, multimedia?
How do you want a computer program to assist you? What tasks do you want to mechanize? What specific tasks do you want computerized?
You may not want all the features these programs espouse. What are your expectations of what the program will be able to assist you in doing? Are your expectations realistic?

What resources are available to you? Which programs can your computer support? Which programs can you afford? What resources (time, personnel, material) necessary for learning how to use a given program are available to you?

What are your preconceptions about these programs? How have other users' opinions, product marketing, or other sources of information about qualitative data analysis software programs influenced your preferences? Are your assumptions about programs accurate? What more would you like to learn about particular programs?

Which of these questions or concerns are most important to you? How would you rank the factors most important to you in considering a software purchase? What questions have been left out? (Hesse-Biber & Crofts, 2008)

Reflecting on these user concerns before attempting to select a qualitative data analysis program puts users in a position to critically evaluate for themselves how each program might integrate into their unique research projects. By trying out free demonstration versions and reading through each program's features, as well as looking at examples of how one's colleagues use these programs, researchers can get a feel for how each program may be of use to them. It is important to note that no technological tool, regardless of its features, can independently perform your analysis (Bazeley, 2010; Hesse-Biber & Crofts, 2008).

How Can I Use a Software Program to Analyze My Qualitative Data?

Until a few decades ago, most qualitative research consisted of amassing and manipulating data by hand, using manual procedures such as the cut-and-paste approach described in the data analysis tale later in this chapter. A software program can provide a much quicker way to code and retrieve your qualitative data. Pfaffenberger (1998) notes that qualitative data analysis breaks down qualitative material into its "constituent elements that need to be 'compared,' named, and classified so that their nature and interaction becomes clear" (p. 26). Comparing aspects of your qualitative data requires you to decontextualize and recontextualize these data (Tesch, 1990). Decontextualization means that segments of your data are first looked at in isolation from their particular contexts. These segments are then linked to other decontextualized segments that appear to contain the same meanings or ideas. Assembling like segments into groupings or categories, a process known as recontextualizing your data, provides a mechanism for discovering larger themes or patterns that reveal a new level of understanding of your data as a whole. Researchers often repeat the process of decontextualizing and recontextualizing their data, and this is where a software program can help with the coding and retrieval of text segments. A program's ability to assist with these basic analytical procedures also allows researchers to test and question the themes and ideas they have discovered by searching for "negative cases" that challenge the original thematic groupings of the data. There are a variety of analytical procedures, ranging from a grounded theory approach to your data to a narrative approach.
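Whatever the approach, the basic code-and-retrieve mechanics are similar. To make them concrete, here is a minimal sketch, in Python, of the logic that qualitative software automates. Everything in it is hypothetical: the interview excerpts, participant labels, and code names are invented for illustration and are not drawn from an actual study, and real programs offer far richer features built on the same underlying logic.

from collections import defaultdict

# Decontextualization: each segment is lifted out of its interview
# and tagged with one or more codes. (All excerpts and code labels
# here are hypothetical.)
segments = [
    {"interview": "P01", "text": "I weigh myself every morning.",
     "codes": {"body monitoring", "daily routine"}},
    {"interview": "P02", "text": "My mother always praised me for being slim.",
     "codes": {"family influence", "values thinness"}},
    {"interview": "P03", "text": "I skip lunch when I feel heavy.",
     "codes": {"body monitoring", "values thinness"}},
]

# Recontextualization: group like-coded segments so a theme can be
# read across interviews rather than within a single transcript.
by_code = defaultdict(list)
for segment in segments:
    for code in segment["codes"]:
        by_code[code].append(segment)

def retrieve(*codes):
    # Return every segment tagged with all of the requested codes,
    # the electronic equivalent of pushing a rod through several
    # holes in a stack of edge-punched cards.
    wanted = set(codes)
    return [s for s in segments if wanted <= s["codes"]]

# Read one theme across interviews.
for segment in by_code["values thinness"]:
    print(segment["interview"], "-", segment["text"])

# Retrieve the intersection of two codes; segments carrying one code
# but not the other are candidates for "negative case" scrutiny.
for segment in retrieve("body monitoring", "values thinness"):
    print(segment["interview"], "-", segment["text"])

Notice that recoding a segment here means editing its set of code labels rather than recopying any text, which is precisely the advantage software holds over the scissors-and-file-folder procedure described in the data analysis tale below.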
Hesse-Biber and Crofts (2008) note:

It is crucial to remember, however, that not all qualitative research approaches and traditions use inductive analytic methods (such as a "grounded theory approach" to analysis). Narrative analysts are interested in stories and want to code and retrieve narratives with an eye to their inherent structure, such as the chronological sequence of events in a narrative. Other researchers prefer to analyze their data utilizing theories developed prior to their collection of data. Burawoy (1991) suggests an "extended case method" of data analysis that begins with explicit theorizing about what the researcher hopes to find in conducting a given research project. This method then uses the specific research study to test out critical components of the researcher's theoretical framework, with the idea that data are collected in order to reconfigure existing theory by subjecting it to empirical verification. The researcher's theory drives all aspects of the data project. (p. 658)

A Data Analysis Tale by Sharlene Nagy Hesse-Biber

Dissertation data had been occupying one room of my apartment, often spread out over the floor organized into neat and sometimes not-so-neat piles. Many months had been spent in this room devoted to managing and analyzing a set of almost 80 in-depth intensive interviews. With scissors in hand, I first read over a set of new interviews and proceeded to "cut up" all relevant chunks of textual data from each interview and paste similarly coded data bites into a separate file folder. However, each time my analytical/conceptual scheme changed, categories would have to be completely altered. If I wanted to apply a different code to the same chunk of text, I needed to recopy the segment. As my data analysis proceeded, I found myself revising and deleting some previously coded categories, which also required me to photocopy interviews again and repeat part of the coding process. My ability to assign multiple codes to text and to recode different segments was often thwarted by the "cut and paste" procedure. And while I liked the idea of seeing and handling all of my data in its entirety, as the interviews accumulated it became more and more difficult to see the "big picture."

Creating memos on different aspects of my analysis and coding procedure was a critical step in assisting with the discovery of some major code categories and themes in my data, as well as relationships between code categories. It was during this time that I discovered a set of "key sort" data cards, or "edge-punched cards." These cards were the new technological rage at the time, especially among anthropologists working in the late 1970s and early 1980s. They were 8" x 3" cards ringed with holes numbered across the edge of the card. I placed all my interview material on these cards and proceeded to code the data on each card by punching open the numbered ring corresponding to that code. You could conceivably have up to one hundred codes for any given card, but I usually had between five and twenty codes per card, depending on the information a given card contained. Periodically, I would assemble or "stack up" all the cards and begin to retrieve my code categories from the deck using a rod, or what I called my "knitting needle," inserted through the circular holes in my stack of cards.
If I was interested in retrieving a particular code or set of codes for my study, I would put my knitting needle through the relevant numbered holes, shake the pile, and out would drop all the data chunks for that code. In fact, I would sometimes have a great time shaking the deck, and I can remember curious onlookers asking me if I was "OK" as all my coded cards came tumbling out of the pile so that I could retrieve my data bounty to analyze. I would repeat this process of coding and retrieving by hand as my analysis proceeded, until I felt I had sufficiently captured the meaning of a specific code by comparing and contrasting different chunks of similarly coded data, or until I had latched onto a significant pattern in my data.

Source: Adapted from Hesse-Biber & Crofts, 2008.

Conclusion

Here is a list of questions you might consider in undertaking your own evaluation of the analysis and interpretation section of your research project. This evaluation checklist is not exhaustive but is meant to highlight some of the important factors you might take into account.

Overall Research Question
Is my research question clearly stated? Is the question too broad? Too narrow?

Data Collection
Do the data fit the research question?

Method
Is the method compatible with the purpose (research question)? How well are your data collection strategies described?

Sample
How did you choose participants? Are these participants a valid choice for your research?

Analysis
How did you arrive at your specific findings? Are specific analysis strategies discussed? Have you done what you said you would do? Are your data analysis approaches compatible with your research question?

Interpretation
Can readers get a sense (gestalt) of the meaning of your data from your written findings? Are your research findings placed in the context of the literature on the topic? Does the evidence fit your data? Are the data congruent with your research question?

Validity: Issues of Credibility and Trustworthiness
Why should the reader buy into the validity of your analysis and interpretation? What are some criteria for assessing the validity of your research study? Do participants recognize their own experiences in your analysis and interpretation of the data? Why or why not? Do you provide an audit trail of your work? Can the reader follow the analytical steps (i.e., the audit trail) you provide as evidence of credibility? The more transparent you are about these issues, the higher the probability that your reader will find your findings trustworthy and credible.

Conclusion
Does your conclusion reflect your research findings? Have you overstated what you have found (i.e., gone beyond your research findings)?

Qualitative data analysis and interpretation proceed as an iterative, back-and-forth process; keep in mind the metaphor suggested in Chapter 7 of putting together the pieces of a puzzle. A little bit of data can go a long way in generating meaning, and you should not be tempted to gather too much data while failing to reflect on the data bit by bit. What is required is a creative spirit and a set of analytical and interpretative skills. Coding and memoing are two powerful techniques you might employ in the process of understanding and interpreting your data. You may encounter false starts as well as moments of discovery and the generation of theoretical insight in the analysis and interpretation of your data. This type of work is not for the fainthearted.
It often requires attention to detail and perseverance in the face of chaos, as well as a knack for tolerating ambiguity. The writing up of your research also requires that you, the researcher, be reflective about your own positionality: the set of social and economic attributes you bring to bear in analyzing and interpreting your data. It is a journey well worth taking, for it leads to our understanding and capturing of the lived reality of those we research.

Glossary

Analysis and interpretation. Data analysis is how you go about summarizing and representing the data you have collected; interpretation asks what those data mean. It is how you make meaning of what you have analyzed. The two terms are intricately connected in that meaning making is an iterative process involving ongoing analysis and interpretation of the data along the way, subjecting your interpretation to scrutiny by comparing and contrasting it with what you have found.

Analytical categories. Analytical categories are developed in order to classify the more focused analytical codes. They take into account the meaning of concepts from the participant's perspective.

Analytical codes. Analytical codes, developed from literal codes, are not tied as tightly to the text itself but begin to rely on the researcher's insights for drawing out interpretation.

Analytical dimensions. As more and more interviews are analyzed, you may come up with several analytical dimensions, which can be viewed as subcodes of analytical categories.

Categorical codes. This is where you begin to group descriptive codes into a more general category of meaning that goes beyond being just a descriptor. For example, descriptive codes such as "weight is my priority" or "I am dieting every day" would be grouped into a more analytical category such as "values thinness."

Coding. Coding generally consists of identifying chunks or segments in your textual data and giving each of these a label (code). Coding is the analytical strategy many qualitative researchers employ to help them locate key themes, patterns, ideas, and concepts that may exist within their data.

Descriptive codes. Descriptive codes within one's data, discovered during the analysis process, can eventually be used to generate a set of key concepts (categories) that are much more analytical (see categorical codes in this glossary).

Focused coding. A focused coding procedure allows for the building and clarifying of concepts. In focused coding, a researcher examines all the data in a category, compares each piece of data with every other piece, and finally builds a clear working definition of each concept, which is then named.

Internal self-assessment. A code that is interpretative and not tied as tightly to the text itself but begins to rely on the researcher's insights for drawing out interpretation.

Literal codes. Literal codes consist of words that appear within the text itself. They are usually descriptive codes.

Memoing. Memoing, or memo writing, is the writing of documents that track any ideas the researcher comes up with while reading notes, interviews, and so on. Memoing should be done at all points in the analysis process.

Discussion Questions

1. Discuss the differences between coding and memoing.
2. What are the differences between a code and a category?
3. What is a grounded theory approach to coding? Why is this an inductive process?
4. What are some of the advantages and disadvantages of using a software program to analyze your qualitative data?
5. In what sense is transcribing your data also analyzing your data? Provide an example.
6. What are some more informal strategies for dividing up your data than using a grounded theory approach?

Resources

Computer-Assisted Qualitative Data Analysis: http://www.surrey.ac.uk/sociology/research/researchcentres/caqdas
This website offers workshops and training sessions (as well as general information) about using computer-assisted programs to analyze qualitative data. It is a great site for those interested in exploring computer-assisted analysis.

Online QDA: Learning Qualitative Data Analysis on the Web: http://onlineqda.hud.ac.uk/Introduction/index.php
A comprehensive website that covers different types of qualitative data analysis procedures.

Companion Website: study.sagepub.com/hessebiber3e
The companion website features selected full-text SAGE journal articles and mobile-friendly practice quizzes that align with key concepts from this chapter.