Chapter 13: Unobtrusive Research: Qualitative and Quantitative Approaches
13.3 Unobtrusive Methods
This section focuses on how to gather data unobtrusively and what to do with those data once they have been collected. A variety of ways exist for gathering data unobtrusively. For these purposes we will focus on three: content analysis, physical trace, and archival methods.
Content Analysis
One way of conducting unobtrusive research is to analyze texts, and texts come in all formats. Content analysis is a type of unobtrusive research that involves the study of human communications; another way to think of it is as a way of studying texts and their meaning. At its core, content analysis addresses the questions of “Who says what, to whom, why, how, and with what effect?” (Babbie, 2010, pp. 328–329). Here we use a more liberal definition of text than you might find in your dictionary. The texts that content analysts investigate include actual written copy (e.g., newspapers, letters, and communiqués) and content that we might see or hear (e.g., speeches or other performances). Content analysts might also investigate more visual representations of human communication, such as television shows, advertisements, or movies. Content analysis can also be an effective way to investigate policy change over time. For example, Sheppard and Fennell (2019) used a content analysis approach to examine public sector tourism policies from around the world over a span of approximately 30 years. In their research, they were looking for evidence of growing concern for the environment and the welfare of animals used in the tourism experience (e.g., as beasts of burden or in racing, fighting, competitions, hunting, guiding, and captivity/entertainment).
One important point to note is that content analysis is usually concerned with analyzing primary sources of data; in other words, the data are original. In contrast, secondary sources are those that have already been analyzed. The distinction between primary and secondary sources is important for many aspects of social science, but it is especially important to understand when conducting content analysis. Less frequently, a content analysis can involve the analysis of secondary sources. In those instances, the researcher’s focus is usually on the process by which the original analyst or presenter of data reached their conclusions, or on the choices that were made in terms of how and in what ways to present the data.
Sometimes students new to research methods struggle to grasp the difference between a content analysis of secondary sources and a review of literature, which was discussed in Chapter 5 “The Literature Review”. With a review of literature, researchers analyze secondary materials to try to understand what we know, and what we do not know, about a particular topic. The sources used to conduct a scholarly review of the literature are typically peer-reviewed sources, written by trained scholars, published in some academic journal or press, and based on empirical research that has been conducted using accepted techniques of data collection for the discipline (scholarly theoretical pieces are included in literature reviews as well). These sources are reviewed in order to arrive at some conclusion about our overall knowledge of a topic, and their findings are generally taken at face value.
A content analysis of scholarly literature would raise questions not raised in a literature review. A content analyst might examine scholarly articles to learn something about the authors (e.g., who publishes what, and where?); publication outlets (e.g., how well do different journals represent the diversity of the discipline?); or topics (e.g., how has the popularity of topics shifted over time?). A content analysis of scholarly articles would be a study of the studies, as opposed to a review of the studies. For example, Sheppard and Fennell wanted to understand whether tourism policy demonstrated a growing concern over time for animal welfare. The researchers conducted their content analysis of different policies from around the world, looking for words that were associated with concern for animal welfare. Occurrences of these words were counted. In this example, the researchers were not aiming to summarize the content of the tourism policies; rather, they were attempting to learn something about how the policies had evolved over time to demonstrate concern for animals, if at all.
Content analysis can be qualitative or quantitative, and often researchers will use both strategies to strengthen their investigations. In qualitative content analysis the aim is to identify themes in the text being analyzed and to identify the underlying meaning of those themes. Quantitative content analysis, on the other hand, involves assigning numerical values to raw data so that they can be analyzed using various statistical procedures. Sheppard and Fennell used both approaches in their content analysis. They used quantitative approaches by counting the occurrences of words that they considered to be associated with concern for the welfare of animals affected by tourism. They also used qualitative approaches by drawing blocks of text or sentences into their analysis of the various policies to demonstrate how the policies indicated, or did not indicate, concern for animal welfare. We will elaborate on how qualitative and quantitative researchers collect, code, and analyze unobtrusive data in the final portion of this section.
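To make the quantitative side of this more concrete, here is a minimal sketch, in Python, of the kind of word counting a quantitative content analysis might involve. The folder name and keyword list are hypothetical illustrations only; they are not Sheppard and Fennell’s actual coding scheme or software.

```python
# Minimal sketch of a quantitative content analysis step: counting
# keyword occurrences in a set of policy documents. The folder name and
# keyword list are hypothetical, not the coding scheme actually used by
# Sheppard and Fennell (2019).
import re
from collections import Counter
from pathlib import Path

# Illustrative keywords a researcher might associate with animal welfare.
KEYWORDS = ["welfare", "humane", "cruelty", "protection", "sanctuary"]

def count_keywords(text: str) -> Counter:
    """Count how often each keyword appears in a document (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    return Counter({kw: counts[kw] for kw in KEYWORDS})

def analyze_policies(folder: str) -> dict[str, Counter]:
    """Tally keyword occurrences for every .txt policy document in a folder."""
    results = {}
    for path in Path(folder).glob("*.txt"):
        results[path.name] = count_keywords(path.read_text(encoding="utf-8"))
    return results

if __name__ == "__main__":
    # "policies" is a hypothetical folder of plain-text policy documents.
    for document, tallies in analyze_policies("policies").items():
        print(document, dict(tallies))
```

Counts produced this way can then be compared across documents or over time using standard statistical procedures.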
One of the most significant challenges in content analysis is ensuring that the coding of the data can be reproduced (Krippendorff, 2004a, p. 215). Krippendorff (2004b) suggests that an agreement coefficient can be used as an indicator of reliability. He explains the relationship between agreement and reliability by stating that agreement is what we measure, while reliability is what we wish to infer from the measurement. While beyond our purposes here, Krippendorff (2004b) compares seven different agreement coefficients and makes recommendations for testing reliability in content analysis. See Section 13.4 for suggestions on improving reliability in content analysis.
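As a simple illustration of measuring agreement, the sketch below computes percent agreement and Cohen’s kappa for two hypothetical coders who have coded the same ten units. These are simpler coefficients than the ones Krippendorff compares, and the codes are invented for the example; the point is only that agreement between coders is what gets measured, and reliability is what we infer from it.

```python
# Minimal sketch of computing intercoder agreement on the same set of
# coded units. Percent agreement and Cohen's kappa are shown for
# illustration; they are not the coefficients Krippendorff (2004b)
# ultimately recommends.
from collections import Counter

def percent_agreement(coder_a: list[str], coder_b: list[str]) -> float:
    """Proportion of units on which the two coders assigned the same code."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Agreement corrected for the agreement expected by chance."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

if __name__ == "__main__":
    # Hypothetical codes assigned by two coders to ten policy excerpts.
    coder_a = ["welfare", "welfare", "other", "other", "welfare",
               "other", "welfare", "other", "welfare", "other"]
    coder_b = ["welfare", "other", "other", "other", "welfare",
               "other", "welfare", "other", "welfare", "welfare"]
    print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.2f}")
    print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")
```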
Physical Trace
Content is not the only sort of data that researchers can collect unobtrusively. Unobtrusive researchers might also be interested in analyzing the evidence that humans leave behind that tells us something about who they are or what they do. This kind of evidence includes the physical traces left by humans and the material artifacts that tell us something about their beliefs, values, or norms. Fire and police investigators, for example, examine scenes for trace evidence such as fingerprints, fire starter or retardant, and DNA to help solve the mystery of what happened. From a medical point of view, trace evidence such as bruising, cuts, or pupil dilation can help paramedics and doctors determine what has happened to a patient.
There are two types of physical traces: erosion and accretion. Erosion refers to the wearing away, or removal, of material because of a physical activity (e.g., a worn foot path). On the other hand, accretion is the building up of material because of physical activity (e.g., a pile of garbage) (Palys & Atchison, 2014).
One challenge with analyzing physical traces and material artifacts is that you generally do not have access to the people who left the traces or created the artifacts that you are analyzing. (And if you did find a way to contact them, in so doing, your research would no longer qualify as unobtrusive!) It can be especially tricky to analyze the meanings of these materials if they come from a historical or cultural context other than your own. Situating the traces or artifacts you wish to analyze both in their original contexts and in your own is not always easy, and can lead to problems related to validity and reliability. How do you know that you are viewing an object or physical trace in the way that it was intended to be viewed? Do you have the necessary understanding or knowledge about the background of its original creators or users to understand where they were coming from when they created it?
While physical traces and material artifacts make excellent sources of data, analyzing their meaning takes more than simply trying to understand them from your own contextual position. You must also be aware of who caused the physical trace or created the artifact, when they created it, why they created it, and for whom they created it. Answering these questions will require accessing materials in addition to the traces or artifacts themselves. It may require accessing historical documents or, if it is a contemporary trace or artifact, perhaps another method of data collection such as interviews with its creators.
Archival Measures
Archival measures are hard copy documents or records, including written or tape-recorded material, photographs, newspapers, books, magazines, diaries, and letters. Webpages are also a source of archival measures and can include documents, images, videos, and audio files, in addition to written materials (Palys & Atchison, 2014). One might argue that archival measures are just another form of accretion measure because they are the products of human activity; however, they are defined separately due to significant differences and the vast quantity of materials that are classified as archival measures.
There are many benefits to using archival measures. For example, they enable a researcher to look at historical evidence, providing an indication of social processes. As such, archival measures lend themselves well to longitudinal studies. One thing to consider, however, is that archival sources were not created with a researcher’s review in mind. As a result, the reasons for a document’s creation, and what may have influenced its content, should be given consideration and critical thought. In some cases, researchers will use data from previous studies to assess the material from another angle. Survey data are frequently used in this way. Issues like memory fade, telescoping, and the like, which influence how people respond to survey questions, remain an issue for researchers doing secondary analysis, regardless of how good the questions are.
Another advantage of archival methods is that the researcher can look at all relevant records, or the entire “population,” assuming the records have been digitized. In such cases, the researcher does not need to worry about choosing a representative sample. Rather, the researcher can analyze all of the relevant records (the entire population) with the use of a computer.