What Is a Case Study? | Definition, Examples & Methods

Published on May 8, 2019 by Shona McCombes. Revised on November 20, 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organization, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating, and understanding different aspects of a research problem.

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyze the case
  • Other interesting articles

When to do a case study

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.


Step 1: Select a case

Once you have developed your problem statement and research questions, you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Tip: If your research is more practical in nature and aims to simultaneously investigate an issue as you solve it, consider conducting action research instead.

Unlike quantitative or experimental research, a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

Example of an outlying case study: In the 1960s, the town of Roseto, Pennsylvania, was discovered to have extremely low rates of heart disease compared to the US average. It became an important case study for understanding previously neglected causes of heart disease.

However, you can also choose a more common or representative case to exemplify a particular category, experience, or phenomenon.

Example of a representative case study: In the 1920s, two sociologists used Muncie, Indiana, as a case study of a typical American city that supposedly exemplified the changing culture of the US at the time.

Step 2: Build a theoretical framework

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework. This means identifying key concepts and theories to guide your analysis and interpretation.

Step 3: Collect your data

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.

Example of a mixed methods case study: For a case study of a wind farm development in a rural area, you could collect quantitative data on employment rates and business revenue, collect qualitative data on local people’s perceptions and experiences, and analyze local and national media coverage of the development.

The aim is to gain as thorough an understanding as possible of the case and its context.


Step 4: Describe and analyze the case

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results, and discussion.

Others are written in a more narrative style, aiming to explore the case from various angles and analyze its meanings and implications (for example, by using textual analysis or discourse analysis).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Cite this Scribbr article


McCombes, S. (2023, November 20). What Is a Case Study? | Definition, Examples & Methods. Scribbr. Retrieved February 21, 2024, from https://www.scribbr.com/methodology/case-study/



Writing a Case Study


What is a case study?


A case study is:

  • An in-depth research design that primarily uses a qualitative methodology but sometimes includes quantitative methodology.
  • Used to examine an identifiable problem confirmed through research.
  • Used to investigate an individual, group of people, organization, or event.
  • Used to mostly answer "how" and "why" questions.

What are the different types of case studies?


Note: These are the primary types of case studies. As you continue to research and learn about case studies, you will begin to find a robust list of different types.

Who are your case study participants?


What is triangulation?

Validity and credibility are an essential part of the case study. Therefore, the researcher should include triangulation to ensure trustworthiness while accurately reflecting what the researcher seeks to investigate.


How to write a case study?

When developing a case study, there are different ways you could present the information, but remember to include the five parts of the case study.




Quantitative Research – Methods, Types and Analysis

What is Quantitative Research?

Quantitative research is a type of research that collects and analyzes numerical data to test hypotheses and answer research questions. This research typically involves a large sample size and uses statistical analysis to make inferences about a population based on the data collected. It often involves the use of surveys, experiments, or other structured data collection methods to gather quantitative data.

Quantitative Research Methods

Quantitative Research Methods are as follows:

Descriptive Research Design

Descriptive research design is used to describe the characteristics of a population or phenomenon being studied. This research method is used to answer the questions of what, where, when, and how. Descriptive research designs use a variety of methods such as observation, case studies, and surveys to collect data. The data is then analyzed using statistical tools to identify patterns and relationships.
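As a minimal sketch of what descriptive analysis can look like in practice (the data and variable names below are hypothetical, not drawn from any study cited here), the following Python snippet summarizes a small set of survey responses with pandas:

```python
# Minimal sketch (hypothetical data): summarizing survey responses
# with descriptive statistics using pandas.
import pandas as pd

# Hypothetical survey data: age and weekly study hours for 6 respondents
df = pd.DataFrame({
    "age": [19, 21, 20, 23, 22, 20],
    "study_hours": [10, 14, 8, 20, 15, 12],
})

# describe() reports count, mean, std, min, quartiles, and max per column
print(df.describe())

# A frequency table answers "how many" for a categorical variable
df["year"] = ["first", "second", "second", "fourth", "third", "second"]
print(df["year"].value_counts())
```

The summary statistics and frequency counts answer the "what" and "how many" questions that descriptive designs target.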

Correlational Research Design

Correlational research design is used to investigate the relationship between two or more variables. Researchers use correlational research to determine whether a relationship exists between variables and to what extent they are related. This research method involves collecting data from a sample and analyzing it using statistical tools such as correlation coefficients.
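For illustration only, here is a minimal Python sketch (with made-up values) of computing a Pearson correlation coefficient between two variables using SciPy:

```python
# Minimal sketch (hypothetical data): quantifying the relationship
# between two variables with a Pearson correlation coefficient.
from scipy import stats

hours_studied = [2, 4, 5, 7, 8, 10, 11]
exam_score    = [52, 58, 61, 70, 73, 80, 84]

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")  # r near +1 indicates a strong positive relationship
```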

Quasi-experimental Research Design

Quasi-experimental research design is used to investigate cause-and-effect relationships between variables. This research method is similar to experimental research design, but it lacks full control over the independent variable. Researchers use quasi-experimental research designs when it is not feasible or ethical to manipulate the independent variable.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This research method involves manipulating the independent variable and observing the effects on the dependent variable. Researchers use experimental research designs to test hypotheses and establish cause-and-effect relationships.

Survey Research

Survey research involves collecting data from a sample of individuals using a standardized questionnaire. This research method is used to gather information on attitudes, beliefs, and behaviors of individuals. Researchers use survey research to collect data quickly and efficiently from a large sample size. Survey research can be conducted through various methods such as online, phone, mail, or in-person interviews.

Quantitative Research Analysis Methods

Here are some commonly used quantitative research analysis methods:

Statistical Analysis

Statistical analysis is the most common quantitative research analysis method. It involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis can be used to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.
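As one small, hedged example of such hypothesis testing, the sketch below (hypothetical scores, not real study data) compares two groups with an independent-samples t-test in Python:

```python
# Minimal sketch (hypothetical data): testing whether two groups differ
# on a numerical outcome with an independent-samples t-test.
from scipy import stats

control   = [72, 68, 75, 70, 66, 74, 71]
treatment = [78, 81, 74, 79, 83, 77, 80]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g., < .05) suggests the group difference is unlikely under the null hypothesis
```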

Regression Analysis

Regression analysis is a statistical technique used to analyze the relationship between one dependent variable and one or more independent variables. Researchers use regression analysis to identify and quantify the impact of independent variables on the dependent variable.
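A minimal, illustrative sketch of a regression analysis in Python, assuming a small hypothetical dataset and using statsmodels' formula interface:

```python
# Minimal sketch (hypothetical data): ordinary least squares regression
# with one dependent variable and two independent variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "income":     [32, 41, 55, 48, 62, 71, 58, 66],  # dependent variable (in $1000s)
    "education":  [12, 14, 16, 15, 17, 19, 16, 18],  # years of schooling
    "experience": [3, 5, 8, 10, 12, 15, 9, 14],      # years of work experience
})

model = smf.ols("income ~ education + experience", data=df).fit()
print(model.summary())  # coefficients quantify each predictor's impact on income
```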

Factor Analysis

Factor analysis is a statistical technique used to identify underlying factors that explain the correlations among a set of variables. Researchers use factor analysis to reduce a large number of variables to a smaller set of factors that capture the most important information.
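The following sketch simulates hypothetical questionnaire items and recovers two underlying factors with scikit-learn; it is meant only to show the general shape of a factor analysis, not a recommended workflow:

```python
# Minimal sketch (simulated data): reducing six survey items to two
# underlying factors with scikit-learn's FactorAnalysis.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
anxiety    = rng.normal(size=n)   # latent factor 1
motivation = rng.normal(size=n)   # latent factor 2

# Six observed items, each driven mostly by one latent factor plus noise
items = np.column_stack([
    anxiety    + rng.normal(scale=0.3, size=n),
    anxiety    + rng.normal(scale=0.3, size=n),
    anxiety    + rng.normal(scale=0.3, size=n),
    motivation + rng.normal(scale=0.3, size=n),
    motivation + rng.normal(scale=0.3, size=n),
    motivation + rng.normal(scale=0.3, size=n),
])

fa = FactorAnalysis(n_components=2).fit(items)
print(fa.components_.round(2))  # loadings: which items belong to which factor
```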

Structural Equation Modeling

Structural equation modeling is a statistical technique used to test complex relationships between variables. It involves specifying a model that includes both observed and unobserved variables, and then using statistical methods to test the fit of the model to the data.

Time Series Analysis

Time series analysis is a statistical technique used to analyze data that is collected over time. It involves identifying patterns and trends in the data, as well as any seasonal or cyclical variations.
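As a rough illustration (synthetic monthly data, not a real series), the sketch below separates trend and seasonal components with statsmodels:

```python
# Minimal sketch (synthetic data): separating trend and seasonal
# components of monthly data with seasonal_decompose.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Four years of monthly "sales" with an upward trend and yearly seasonality
months = pd.date_range("2020-01-01", periods=48, freq="MS")
trend = np.linspace(100, 160, 48)
season = 10 * np.sin(2 * np.pi * np.arange(48) / 12)
noise = np.random.default_rng(1).normal(scale=3, size=48)
sales = pd.Series(trend + season + noise, index=months)

result = seasonal_decompose(sales, model="additive", period=12)
print(result.trend.dropna().head())  # estimated trend
print(result.seasonal.head(12))      # estimated monthly seasonal pattern
```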

Multilevel Modeling

Multilevel modeling is a statistical technique used to analyze data that is nested within multiple levels. For example, researchers might use multilevel modeling to analyze data that is collected from individuals who are nested within groups, such as students nested within schools.
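Here is a minimal sketch of a random-intercept multilevel model for the students-within-schools example, using simulated data and statsmodels' MixedLM; all names and values are illustrative assumptions:

```python
# Minimal sketch (simulated data): a random-intercept model for students
# nested within schools, using statsmodels' mixed linear model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, n_students = 20, 30
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(scale=2.0, size=n_schools)[school]  # school-level variation
hours = rng.uniform(0, 10, size=n_schools * n_students)
score = 60 + 2.5 * hours + school_effect + rng.normal(scale=5.0, size=n_schools * n_students)

df = pd.DataFrame({"score": score, "hours": hours, "school": school})

# groups= identifies the higher level; the model estimates a random intercept per school
model = smf.mixedlm("score ~ hours", data=df, groups=df["school"]).fit()
print(model.summary())
```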

Applications of Quantitative Research

Quantitative research has many applications across a wide range of fields. Here are some common examples:

  • Market Research: Quantitative research is used extensively in market research to understand consumer behavior, preferences, and trends. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform marketing strategies, product development, and pricing decisions.
  • Health Research: Quantitative research is used in health research to study the effectiveness of medical treatments, identify risk factors for diseases, and track health outcomes over time. Researchers use statistical methods to analyze data from clinical trials, surveys, and other sources to inform medical practice and policy.
  • Social Science Research: Quantitative research is used in social science research to study human behavior, attitudes, and social structures. Researchers use surveys, experiments, and other quantitative methods to collect data that can inform social policies, educational programs, and community interventions.
  • Education Research: Quantitative research is used in education research to study the effectiveness of teaching methods, assess student learning outcomes, and identify factors that influence student success. Researchers use experimental and quasi-experimental designs, as well as surveys and other quantitative methods, to collect and analyze data.
  • Environmental Research: Quantitative research is used in environmental research to study the impact of human activities on the environment, assess the effectiveness of conservation strategies, and identify ways to reduce environmental risks. Researchers use statistical methods to analyze data from field studies, experiments, and other sources.

Characteristics of Quantitative Research

Here are some key characteristics of quantitative research:

  • Numerical data: Quantitative research involves collecting numerical data through standardized methods such as surveys, experiments, and observational studies. This data is analyzed using statistical methods to identify patterns and relationships.
  • Large sample size: Quantitative research often involves collecting data from a large sample of individuals or groups in order to increase the reliability and generalizability of the findings.
  • Objective approach: Quantitative research aims to be objective and impartial in its approach, focusing on the collection and analysis of data rather than personal beliefs, opinions, or experiences.
  • Control over variables: Quantitative research often involves manipulating variables to test hypotheses and establish cause-and-effect relationships. Researchers aim to control for extraneous variables that may impact the results.
  • Replicable: Quantitative research aims to be replicable, meaning that other researchers should be able to conduct similar studies and obtain similar results using the same methods.
  • Statistical analysis: Quantitative research involves using statistical tools and techniques to analyze the numerical data collected during the research process. Statistical analysis allows researchers to identify patterns, trends, and relationships between variables, and to test hypotheses and theories.
  • Generalizability: Quantitative research aims to produce findings that can be generalized to larger populations beyond the specific sample studied. This is achieved through the use of random sampling methods and statistical inference.

Examples of Quantitative Research

Here are some examples of quantitative research in different fields:

  • Market Research: A company conducts a survey of 1000 consumers to determine their brand awareness and preferences. The data is analyzed using statistical methods to identify trends and patterns that can inform marketing strategies.
  • Health Research: A researcher conducts a randomized controlled trial to test the effectiveness of a new drug for treating a particular medical condition. The study involves collecting data from a large sample of patients and analyzing the results using statistical methods.
  • Social Science Research: A sociologist conducts a survey of 500 people to study attitudes toward immigration in a particular country. The data is analyzed using statistical methods to identify factors that influence these attitudes.
  • Education Research: A researcher conducts an experiment to compare the effectiveness of two different teaching methods for improving student learning outcomes. The study involves randomly assigning students to different groups and collecting data on their performance on standardized tests.
  • Environmental Research: A team of researchers conducts a study to investigate the impact of climate change on the distribution and abundance of a particular species of plant or animal. The study involves collecting data on environmental factors and population sizes over time and analyzing the results using statistical methods.
  • Psychology : A researcher conducts a survey of 500 college students to investigate the relationship between social media use and mental health. The data is analyzed using statistical methods to identify correlations and potential causal relationships.
  • Political Science: A team of researchers conducts a study to investigate voter behavior during an election. They use survey methods to collect data on voting patterns, demographics, and political attitudes, and analyze the results using statistical methods.

How to Conduct Quantitative Research

Here is a general overview of how to conduct quantitative research:

  • Develop a research question: The first step in conducting quantitative research is to develop a clear and specific research question. This question should be based on a gap in existing knowledge, and should be answerable using quantitative methods.
  • Develop a research design: Once you have a research question, you will need to develop a research design. This involves deciding on the appropriate methods to collect data, such as surveys, experiments, or observational studies. You will also need to determine the appropriate sample size, data collection instruments, and data analysis techniques.
  • Collect data: The next step is to collect data. This may involve administering surveys or questionnaires, conducting experiments, or gathering data from existing sources. It is important to use standardized methods to ensure that the data is reliable and valid.
  • Analyze data: Once the data has been collected, it is time to analyze it. This involves using statistical methods to identify patterns, trends, and relationships between variables. Common statistical techniques include correlation analysis, regression analysis, and hypothesis testing (see the sketch after this list).
  • Interpret results: After analyzing the data, you will need to interpret the results. This involves identifying the key findings, determining their significance, and drawing conclusions based on the data.
  • Communicate findings: Finally, you will need to communicate your findings. This may involve writing a research report, presenting at a conference, or publishing in a peer-reviewed journal. It is important to clearly communicate the research question, methods, results, and conclusions to ensure that others can understand and replicate your research.
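To make the analyze-and-interpret steps above concrete, here is a small end-to-end sketch in Python using hypothetical survey responses (invented for illustration) and a chi-square test of independence:

```python
# Minimal end-to-end sketch (hypothetical survey data) of the analyze and
# interpret steps: testing whether job satisfaction is related to remote-work status.
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical coded survey responses
df = pd.DataFrame({
    "remote":    ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "yes", "no"] * 20,
    "satisfied": ["yes", "yes", "no", "yes", "yes", "no", "yes", "no", "no", "no"] * 20,
})

# Step "Analyze data": cross-tabulate and test for an association
table = pd.crosstab(df["remote"], df["satisfied"])
chi2, p, dof, expected = chi2_contingency(table)

# Step "Interpret results": report the statistic and a plain-language conclusion
print(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("Evidence of association" if p < 0.05 else "No clear evidence of association")
```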

When to use Quantitative Research

Here are some situations when quantitative research can be appropriate:

  • To test a hypothesis: Quantitative research is often used to test a hypothesis or a theory. It involves collecting numerical data and using statistical analysis to determine if the data supports or refutes the hypothesis.
  • To generalize findings: If you want to generalize the findings of your study to a larger population, quantitative research can be useful. This is because it allows you to collect numerical data from a representative sample of the population and use statistical analysis to make inferences about the population as a whole.
  • To measure relationships between variables: If you want to measure the relationship between two or more variables, such as the relationship between age and income, or between education level and job satisfaction, quantitative research can be useful. It allows you to collect numerical data on both variables and use statistical analysis to determine the strength and direction of the relationship.
  • To identify patterns or trends: Quantitative research can be useful for identifying patterns or trends in data. For example, you can use quantitative research to identify trends in consumer behavior or to identify patterns in stock market data.
  • To quantify attitudes or opinions: If you want to measure attitudes or opinions on a particular topic, quantitative research can be useful. It allows you to collect numerical data using surveys or questionnaires and analyze the data using statistical methods to determine the prevalence of certain attitudes or opinions.

Purpose of Quantitative Research

The purpose of quantitative research is to systematically investigate and measure the relationships between variables or phenomena using numerical data and statistical analysis. The main objectives of quantitative research include:

  • Description: To provide a detailed and accurate description of a particular phenomenon or population.
  • Explanation: To explain the reasons for the occurrence of a particular phenomenon, such as identifying the factors that influence a behavior or attitude.
  • Prediction: To predict future trends or behaviors based on past patterns and relationships between variables.
  • Control: To identify the best strategies for controlling or influencing a particular outcome or behavior.

Quantitative research is used in many different fields, including social sciences, business, engineering, and health sciences. It can be used to investigate a wide range of phenomena, from human behavior and attitudes to physical and biological processes. The purpose of quantitative research is to provide reliable and valid data that can be used to inform decision-making and improve understanding of the world around us.

Advantages of Quantitative Research

There are several advantages of quantitative research, including:

  • Objectivity: Quantitative research is based on objective data and statistical analysis, which reduces the potential for bias or subjectivity in the research process.
  • Reproducibility: Because quantitative research involves standardized methods and measurements, it is more likely to be reproducible and reliable.
  • Generalizability: Quantitative research allows for generalizations to be made about a population based on a representative sample, which can inform decision-making and policy development.
  • Precision: Quantitative research allows for precise measurement and analysis of data, which can provide a more accurate understanding of phenomena and relationships between variables.
  • Efficiency: Quantitative research can be conducted relatively quickly and efficiently, especially when compared to qualitative research, which may involve lengthy data collection and analysis.
  • Large sample sizes: Quantitative research can accommodate large sample sizes, which can increase the representativeness and generalizability of the results.

Limitations of Quantitative Research

There are several limitations of quantitative research, including:

  • Limited understanding of context: Quantitative research typically focuses on numerical data and statistical analysis, which may not provide a comprehensive understanding of the context or underlying factors that influence a phenomenon.
  • Simplification of complex phenomena: Quantitative research often involves simplifying complex phenomena into measurable variables, which may not capture the full complexity of the phenomenon being studied.
  • Potential for researcher bias: Although quantitative research aims to be objective, there is still the potential for researcher bias in areas such as sampling, data collection, and data analysis.
  • Limited ability to explore new ideas: Quantitative research is often based on pre-determined research questions and hypotheses, which may limit the ability to explore new ideas or unexpected findings.
  • Limited ability to capture subjective experiences: Quantitative research is typically focused on objective data and may not capture the subjective experiences of individuals or groups being studied.
  • Ethical concerns: Quantitative research may raise ethical concerns, such as invasion of privacy or the potential for harm to participants.


Case Studies

This guide examines case studies, a form of qualitative descriptive research that is used to look at individuals, a small group of participants, or a group as a whole. Researchers collect data about participants using participant and direct observations, interviews, protocols, tests, examinations of records, and collections of writing samples. Starting with a definition of the case study, the guide moves to a brief history of this research method. Using several well documented case studies, the guide then looks at applications and methods including data collection and analysis. A discussion of ways to handle validity, reliability, and generalizability follows, with special attention to case studies as they are applied to composition studies. Finally, this guide examines the strengths and weaknesses of case studies.

Definition and Overview

Case study refers to the collection and presentation of detailed information about a particular participant or small group, frequently including the accounts of subjects themselves. A form of qualitative descriptive research, the case study looks intensely at an individual or small participant pool, drawing conclusions only about that participant or group and only in that specific context. Researchers do not focus on the discovery of a universal, generalizable truth, nor do they typically look for cause-effect relationships; instead, emphasis is placed on exploration and description.

Case studies typically examine the interplay of all variables in order to provide as complete an understanding of an event or situation as possible. This type of comprehensive understanding is arrived at through a process known as thick description, which involves an in-depth description of the entity being evaluated, the circumstances under which it is used, the characteristics of the people involved in it, and the nature of the community in which it is located. Thick description also involves interpreting the meaning of demographic and descriptive data such as cultural norms and mores, community values, ingrained attitudes, and motives.

Unlike quantitative methods of research, like the survey, which focus on the questions of who, what, where, how much, and how many, and archival analysis, which often situates the participant in some form of historical context, case studies are the preferred strategy when how or why questions are asked. Likewise, they are the preferred method when the researcher has little control over the events, and when there is a contemporary focus within a real life context. In addition, unlike more specifically directed experiments, case studies require a problem that seeks a holistic understanding of the event or situation in question using inductive logic--reasoning from specific to more general terms.

In scholarly circles, case studies are frequently discussed within the context of qualitative research and naturalistic inquiry. Case studies are often referred to interchangeably with ethnography, field study, and participant observation. The underlying philosophical assumptions in the case are similar to these types of qualitative research because each takes place in a natural setting (such as a classroom, neighborhood, or private home), and strives for a more holistic interpretation of the event or situation under study.

Unlike more statistically-based studies which search for quantifiable data, the goal of a case study is to offer new variables and questions for further research. F.H. Giddings, a sociologist in the early part of the century, compares statistical methods to the case study on the basis that the former "are concerned with the distribution of a particular trait, or a small number of traits, in a population, whereas the case study is concerned with the whole variety of traits to be found in a particular instance" (Hammersley 95).

Case studies are not a new form of research; naturalistic inquiry was the primary research tool until the development of the scientific method. The fields of sociology and anthropology are credited with the primary shaping of the concept as we know it today. However, case study research has drawn from a number of other areas as well: the clinical methods of doctors; the casework technique being developed by social workers; the methods of historians and anthropologists, plus the qualitative descriptions provided by quantitative researchers like LePlay; and, in the case of Robert Park, the techniques of newspaper reporters and novelists.

Park was an ex-newspaper reporter and editor who became very influential in developing sociological case studies at the University of Chicago in the 1920s. As a newspaper professional he coined the term "scientific" or "depth" reporting: the description of local events in a way that pointed to major social trends. Park viewed the sociologist as "merely a more accurate, responsible, and scientific reporter." Park stressed the variety and value of human experience. He believed that sociology sought to arrive at natural, but fluid, laws and generalizations in regard to human nature and society. These laws weren't static laws of the kind sought by many positivists and natural law theorists, but rather, they were laws of becoming--with a constant possibility of change. Park encouraged students to get out of the library, to quit looking at papers and books, and to view the constant experiment of human experience. He writes, "Go and sit in the lounges of the luxury hotels and on the doorsteps of the flophouses; sit on the Gold Coast settees and on the slum shakedowns; sit in the Orchestra Hall and in the Star and Garter Burlesque. In short, gentlemen [sic], go get the seats of your pants dirty in real research."

But over the years, case studies have drawn their share of criticism. In fact, the method had its detractors from the start. In the 1920s, the debate between pro-qualitative and pro-quantitative camps became quite heated. Case studies, when compared to statistics, were considered by many to be unscientific. From the 1930s on, the rise of positivism had a growing influence on quantitative methods in sociology. People wanted static, generalizable laws in science. The sociological positivists were looking for stable laws of social phenomena. They criticized case study research because it failed to provide evidence of intersubjective agreement. They also condemned it because of the small number of cases studied and because the under-standardized character of their descriptions made generalization impossible. By the 1950s, quantitative methods, in the form of survey research, had become the dominant sociological approach and case study had become a minority practice.

Educational Applications

The 1950's marked the dawning of a new era in case study research, namely that of the utilization of the case study as a teaching method. "Instituted at Harvard Business School in the 1950s as a primary method of teaching, cases have since been used in classrooms and lecture halls alike, either as part of a course of study or as the main focus of the course to which other teaching material is added" (Armisted 1984). The basic purpose of instituting the case method as a teaching strategy was "to transfer much of the responsibility for learning from the teacher on to the student, whose role, as a result, shifts away from passive absorption toward active construction" (Boehrer 1990). Through careful examination and discussion of various cases, "students learn to identify actual problems, to recognize key players and their agendas, and to become aware of those aspects of the situation that contribute to the problem" (Merseth 1991). In addition, students are encouraged to "generate their own analysis of the problems under consideration, to develop their own solutions, and to practically apply their own knowledge of theory to these problems" (Boyce 1993). Along the way, students also develop "the power to analyze and to master a tangled circumstance by identifying and delineating important factors; the ability to utilize ideas, to test them against facts, and to throw them into fresh combinations" (Merseth 1991).

In addition to the practical application and testing of scholarly knowledge, case discussions can also help students prepare for real-world problems, situations, and crises by providing an approximation of various professional environments (i.e., classroom, board room, courtroom, or hospital). Thus, through the examination of specific cases, students are given the opportunity to work out their own professional issues through the trials, tribulations, experiences, and research findings of others. An obvious advantage to this mode of instruction is that it allows students exposure to settings and contexts that they might not otherwise experience. For example, a student interested in studying the effects of poverty on minority secondary students' grade point averages and S.A.T. scores could access and analyze information from schools as geographically diverse as Los Angeles, New York City, Miami, and New Mexico without ever having to leave the classroom.

The case study method also incorporates the idea that students can learn from one another "by engaging with each other and with each other's ideas, by asserting something and then having it questioned, challenged and thrown back at them so that they can reflect on what they hear, and then refine what they say" (Boehrer 1990). In summary, students can direct their own learning by formulating questions and taking responsibility for the study.

Types and Design Concerns

Researchers use multiple methods and approaches to conduct case studies.

Types of Case Studies

Under the more generalized category of case study exist several subdivisions, each of which is custom selected for use depending upon the goals and/or objectives of the investigator. These types of case study include the following:

Illustrative Case Studies: These are primarily descriptive studies. They typically utilize one or two instances of an event to show what a situation is like. Illustrative case studies serve primarily to make the unfamiliar familiar and to give readers a common language about the topic in question.

Exploratory (or pilot) Case Studies: These are condensed case studies performed before implementing a large-scale investigation. Their basic function is to help identify questions and select types of measurement prior to the main investigation. The primary pitfall of this type of study is that initial findings may seem convincing enough to be released prematurely as conclusions.

Cumulative Case Studies: These serve to aggregate information from several sites collected at different times. The idea behind these studies is that the collection of past studies will allow for greater generalization without additional cost or time being expended on new, possibly repetitive studies.

Critical Instance Case Studies: These examine one or more sites either to study a situation of unique interest with little to no interest in generalizability, or to call into question or challenge a highly generalized or universal assertion. This method is useful for answering cause-and-effect questions.

Identifying a Theoretical Perspective

Much of the case study's design is inherently determined for researchers, depending on the field from which they are working. In composition studies, researchers are typically working from a qualitative, descriptive standpoint. In contrast, physicists will approach their research from a more quantitative perspective. Still, in designing the study, researchers need to make explicit the questions to be explored and the theoretical perspective from which they will approach the case. The three most commonly adopted theories are listed below:

Individual Theories: These focus primarily on the individual development, cognitive behavior, personality, learning and disability, and interpersonal interactions of a particular subject.

Organizational Theories: These focus on bureaucracies, institutions, organizational structure and functions, or excellence in organizational performance.

Social Theories: These focus on urban development, group behavior, cultural institutions, or marketplace functions.

Two examples of case studies are used consistently throughout this chapter. The first, a study produced by Berkenkotter, Huckin, and Ackerman (1988), looks at a first year graduate student's initiation into an academic writing program. The study uses participant-observer and linguistic data collecting techniques to assess the student's knowledge of appropriate discourse conventions. Using the pseudonym Nate to refer to the subject, the study sought to illuminate the particular experience rather than to generalize about the experience of fledgling academic writers collectively.

For example, in Berkenkotter, Huckin, and Ackerman's (1988) study we are told that the researchers are interested in disciplinary communities. In the first paragraph, they ask what constitutes membership in a disciplinary community and how achieving membership might affect a writer's understanding and production of texts. In the third paragraph they state that researchers must negotiate their claims "within the context of his sub specialty's accepted knowledge and methodology." In the next paragraph they ask, "How is literacy acquired? What is the process through which novices gain community membership? And what factors either aid or hinder students learning the requisite linguistic behaviors?" This introductory section ends with a paragraph in which the study's authors claim that during the course of the study, the subject, Nate, successfully makes the transition from "skilled novice" to become an initiated member of the academic discourse community and that his texts exhibit linguistic changes which indicate this transition. In the next section the authors make explicit the sociolinguistic theoretical and methodological assumptions on which the study is based (1988). Thus the reader has a good understanding of the authors' theoretical background and purpose in conducting the study even before it is explicitly stated on the fourth page of the study. "Our purpose was to examine the effects of the educational context on one graduate student's production of texts as he wrote in different courses and for different faculty members over the academic year 1984-85." The goal of the study then, was to explore the idea that writers must be initiated into a writing community, and that this initiation will change the way one writes.

The second example is Janet Emig's (1971) study of the composing process of a group of twelfth graders. In this study, Emig seeks to answer the question of what happens to the self as a result of educational stimuli in terms of academic writing. The case study used methods such as protocol analysis, tape-recorded interviews, and discourse analysis.

In the case of Janet Emig's (1971) study of the composing process of eight twelfth graders, four specific hypotheses were made:

  • Twelfth grade writers engage in two modes of composing: reflexive and extensive.
  • These differences can be ascertained and characterized through having the writers compose aloud their composition process.
  • A set of implied stylistic principles governs the writing process.
  • For twelfth grade writers, extensive writing occurs chiefly as a school-sponsored activity, while reflexive writing occurs chiefly as a self-sponsored activity.

In this study, the chief distinction is between the two dominant modes of composing among older, secondary school students. The distinctions are:

  • The reflexive mode, which focuses on the writer's thoughts and feelings.
  • The extensive mode, which focuses on conveying a message.

Emig also outlines the specific questions which guided the research in the opening pages of her Review of Literature, preceding the report.

Designing a Case Study

After considering the different subcategories of case study and identifying a theoretical perspective, researchers can begin to design their study. Research design is the string of logic that ultimately links the data to be collected and the conclusions to be drawn to the initial questions of the study. Typically, research designs deal with at least four problems:

  • What questions to study
  • What data are relevant
  • What data to collect
  • How to analyze that data

In other words, a research design is basically a blueprint for getting from the beginning to the end of a study. The beginning is an initial set of questions to be answered, and the end is some set of conclusions about those questions.

Because case studies are conducted on topics as diverse as Anglo-Saxon Literature (Thrane 1986) and AIDS prevention (Van Vugt 1994), it is virtually impossible to outline any strict or universal method or design for conducting the case study. However, Robert K. Yin (1993) does offer five basic components of a research design:

  • A study's questions.
  • A study's propositions (if any).
  • A study's units of analysis.
  • The logic that links the data to the propositions.
  • The criteria for interpreting the findings.

In addition to these five basic components, Yin also stresses the importance of clearly articulating one's theoretical perspective, determining the goals of the study, selecting one's subject(s), selecting the appropriate method(s) of collecting data, and providing some considerations to the composition of the final report.

Conducting Case Studies

To obtain as complete a picture of the participant as possible, case study researchers can employ a variety of approaches and methods. These approaches, methods, and related issues are discussed in depth in this section.

Method: Single or Multi-modal?

To obtain as complete a picture of the participant as possible, case study researchers can employ a variety of methods. Some common methods include interviews, protocol analyses, field studies, and participant-observations. Emig (1971) chose to use several methods of data collection. Her sources included conversations with the students, protocol analysis, discrete observations of actual composition, writing samples from each student, and school records (Lauer and Asher 1988).

Berkenkotter, Huckin, and Ackerman (1988) collected data by observing classrooms, conducting faculty and student interviews, collecting self-reports from the subject, and looking at the subject's written work.

A study that was criticized for using a single-method model was done by Flower and Hayes (1984). In this study, which explores the ways in which writers use different forms of knowing to create space, the authors used only protocol analysis to gather data. The study came under heavy fire because of the decision to use only one method.

Participant Selection

Case studies can use one participant or a small group of participants. However, it is important that the participant pool remain relatively small. The participants can represent a diverse cross section of society, but this isn't necessary.

For example, the Berkenkotter, Huckin, and Ackerman (1988) study looked at just one participant, Nate. By contrast, in Janet Emig's (1971) study of the composition process of twelfth graders, eight participants were selected representing a diverse cross section of the community, with volunteers from an all-white upper-middle-class suburban school, an all-black inner-city school, a racially mixed lower-middle-class school, an economically and racially mixed school, and a university school.

Often, a brief "case history" is done on the participants of the study in order to provide researchers with a clearer understanding of their participants, as well as some insight as to how their own personal histories might affect the outcome of the study. For instance, in Emig's study, the investigator had access to the school records of five of the participants, and to standardized test scores for the remaining three. Also made available to the researcher was the information that three of the eight students were selected as NCTE Achievement Award winners. These personal histories can be useful in later stages of the study when data are being analyzed and conclusions drawn.

Data Collection

There are six types of data collected in case studies:

  • Documentation.
  • Archival records.
  • Interviews.
  • Direct observation.
  • Participant observation.
  • Physical artifacts.

In the field of composition research, these six sources might be:

  • A writer's drafts.
  • School records of student writers.
  • Transcripts of interviews with a writer.
  • Transcripts of conversations between writers (and protocols).
  • Videotapes and notes from direct field observations.
  • Hard copies of a writer's work on computer.

Depending on whether researchers have chosen to use a single or multi-modal approach for the case study, they may choose to collect data from one or any combination of these sources.

Protocols, that is, transcriptions of participants talking aloud about what they are doing as they do it, have been particularly common in composition case studies. For example, in Emig's (1971) study, the students were asked, in four different sessions, to give oral autobiographies of their writing experiences and to compose aloud three themes in the presence of a tape recorder and the investigator.

In some studies, only one method of data collection is conducted. For example, the Flower and Hayes (1981) report on the cognitive process theory of writing depends on protocol analysis alone. However, using multiple sources of evidence to increase the reliability and validity of the data can be advantageous.

Case studies are likely to be much more convincing and accurate if they are based on several different sources of information, following a corroborating mode. This conclusion is echoed among many composition researchers. For example, in her study of predrafting processes of high and low-apprehensive writers, Cynthia Selfe (1985) argues that because "methods of indirect observation provide only an incomplete reflection of the complex set of processes involved in composing, a combination of several such methods should be used to gather data in any one study." Thus, in this study, Selfe collected her data from protocols, observations of students role playing their writing processes, audio taped interviews with the students, and videotaped observations of the students in the process of composing.

It can be said, then, that cross-checking data from multiple sources can help provide a multidimensional profile of composing activities in a particular setting. Sharan Merriam (1985) suggests "checking, verifying, testing, probing, and confirming collected data as you go," arguing that this process will follow a funnel-like design, resulting in less data gathering in later phases of the study along with a congruent increase in analysis, checking, verifying, and confirming.

It is important to note that in case studies, as in any qualitative descriptive research, researchers begin their studies with one or several questions driving the inquiry (which influence the key factors the researcher will be looking for during data collection), but they may also find new key factors emerging during data collection. These might be unexpected patterns or linguistic features which become evident only during the course of the research. While not bearing directly on the researcher's guiding questions, these variables may become the basis for new questions asked at the end of the report, thus linking to the possibility of further research.

Data Analysis

As the information is collected, researchers strive to make sense of their data. Generally, researchers interpret their data in one of two ways: holistically or through coding. Holistic analysis does not attempt to break the evidence into parts, but rather to draw conclusions based on the text as a whole. Flower and Hayes (1981), for example, make inferences from entire sections of their students' protocols, rather than searching through the transcripts to look for isolatable characteristics.

However, composition researchers commonly interpret their data by coding, that is, by systematically searching data to identify and/or categorize specific observable actions or characteristics. These observable actions then become the key variables in the study. Sharan Merriam (1988) suggests several analytic frameworks for the organization and presentation of data:

  • The role of participants.
  • The network analysis of formal and informal exchanges among groups.
  • Historical.
  • Thematic.
  • Ritual and symbolism.
  • Critical incidents that challenge or reinforce fundamental beliefs, practices, and values.

There are two purposes of these frameworks: to look for patterns among the data and to look for patterns that give meaning to the case study.

As stated above, while most researchers begin their case studies expecting to look for particular observable characteristics, it is not unusual for key variables to emerge during data collection. Typical variables coded in case studies of writers include pauses writers make in the production of a text, the use of specific linguistic units (such as nouns or verbs), and writing processes (planning, drafting, revising, and editing). In the Berkenkotter, Huckin, and Ackerman (1988) study, for example, researchers coded the participant's texts for use of connectives, discourse demonstratives, average sentence length, off-register words, use of the first person pronoun, and the ratio of definite articles to indefinite articles.
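
As an illustration of what coding for such variables can look like in practice, the short Python sketch below computes two of the features mentioned above, average sentence length and the ratio of definite to indefinite articles, for a small writing sample. The sample text and the simple tokenization rules are assumptions made for this example; they are not the procedures used in the studies cited here.

```python
import re

# Hypothetical writing sample; in a real study this would be a participant's text.
sample = ("The student revised the draft twice. "
          "A reviewer suggested a shorter introduction. "
          "The final version kept the original thesis.")

# Split into sentences and words with deliberately simple rules (an assumption,
# not the coding scheme reported by Berkenkotter, Huckin, and Ackerman).
sentences = [s for s in re.split(r"[.!?]+", sample) if s.strip()]
words = re.findall(r"[A-Za-z']+", sample.lower())

avg_sentence_length = len(words) / len(sentences)
definite = words.count("the")
indefinite = words.count("a") + words.count("an")
article_ratio = definite / indefinite if indefinite else float("inf")

print(f"Average sentence length: {avg_sentence_length:.1f} words")
print(f"Definite-to-indefinite article ratio: {article_ratio:.2f}")
```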

Since coding is inherently subjective, more than one coder is usually employed. In the Berkenkotter, Huckin, and Ackerman (1988) study, for example, three rhetoricians were employed to code the participant's texts for off-register phrases. The researchers established the agreement among the coders before concluding that the participant used fewer off-register words as the graduate program progressed.
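
One common way to quantify such agreement is raw percent agreement together with Cohen's kappa, which corrects for agreement expected by chance. The sketch below is a minimal illustration with invented labels ("off" for an off-register phrase, "ok" otherwise); the particular statistic and the segment labels are assumptions for the example, not the procedure reported in the study above.

```python
from collections import Counter

# Hypothetical codes assigned by two coders to the same ten text segments.
coder_a = ["off", "ok", "ok", "off", "ok", "ok", "off", "ok", "ok", "ok"]
coder_b = ["off", "ok", "off", "off", "ok", "ok", "off", "ok", "ok", "ok"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Agreement expected by chance, from each coder's marginal label frequencies.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
labels = set(coder_a) | set(coder_b)
expected = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```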

Composing the Case Study Report

In the many forms it can take, "a case study is generically a story; it presents the concrete narrative detail of actual, or at least realistic events, it has a plot, exposition, characters, and sometimes even dialogue" (Boehrer 1990). Generally, case study reports are extensively descriptive, with "the most problematic issue often referred to as being the determination of the right combination of description and analysis" (1990). Typically, authors address each step of the research process, and attempt to give the reader as much context as possible for the decisions made in the research design and for the conclusions drawn.

This contextualization usually includes a detailed explanation of the researchers' theoretical positions, of how those theories drove the inquiry or led to the guiding research questions, of the participants' backgrounds, of the processes of data collection, of the training and limitations of the coders, along with a strong attempt to make connections between the data and the conclusions evident.

Although the Berkenkotter, Huckin, and Ackerman (1988) study does not, case study reports often include the reactions of the participants to the study or to the researchers' conclusions. Because case studies tend to be exploratory, most end with implications for further study. Here researchers may identify significant variables that emerged during the research and suggest studies related to these, or the authors may suggest further general questions that their case study generated.

For example, Emig's (1971) study concludes with a section dedicated solely to the topic of implications for further research, in which she suggests several means by which this particular study could have been improved, as well as questions and ideas raised by this study which other researchers might like to address, such as: is there a correlation between a certain personality and a certain composing process profile (e.g. is there a positive correlation between ego strength and persistence in revising)?

Also included in Emig's study is a section dedicated to implications for teaching, which outlines the pedagogical ramifications of the study's findings for teachers currently involved in high school writing programs.

Sharan Merriam (1985) also offers several suggestions for alternative presentations of data:

  • Prepare specialized condensations for appropriate groups.
  • Replace narrative sections with a series of answers to open-ended questions.
  • Present "skimmer's" summaries at beginning of each section.
  • Incorporate headlines that encapsulate information from text.
  • Prepare analytic summaries with supporting data appendixes.
  • Present data in colorful and/or unique graphic representations.

Issues of Validity and Reliability

Once key variables have been identified, they can be analyzed. Reliability becomes a key concern at this stage, and many case study researchers go to great lengths to ensure that their interpretations of the data will be both reliable and valid. Because issues of validity and reliability are an important part of any study in the social sciences, it is important to identify some ways of dealing with results.

Multi-modal case study researchers often balance the results of their coding with data from interviews or writer's reflections upon their own work. Consequently, the researchers' conclusions become highly contextualized. For example, in a case study which looked at the time spent in different stages of the writing process, Berkenkotter concluded that her participant, Donald Murray, spent more time planning his essays than in other writing stages. The report of this case study is followed by Murray's reply, wherein he agrees with some of Berkenkotter's conclusions and disagrees with others.

As is the case with other research methodologies, issues of external validity, construct validity, and reliability need to be carefully considered.

Commentary on Case Studies

Researchers often debate the relative merits of particular methods, among them the case study. This section comments on the strengths and weaknesses of the approach, on ethical considerations, and on concerns about reliability, validity, and generalizability.

Strengths and Weaknesses of Case Studies

Most case study advocates point out that case studies produce much more detailed information than what is available through a statistical analysis. Advocates will also hold that while statistical methods might be able to deal with situations where behavior is homogeneous and routine, case studies are needed to deal with creativity, innovation, and context. Detractors argue that case studies are difficult to generalize because of inherent subjectivity and because they are based on qualitative subjective data, generalizable only to a particular context.

Flexibility

The case study approach is a comparatively flexible method of scientific research. Because its project designs emphasize exploration rather than prescription or prediction, researchers are comparatively freer to discover and address issues as they arise in their studies. In addition, the looser format of case studies allows researchers to begin with broad questions and narrow their focus as the study progresses, rather than attempting to predict every possible outcome before the study begins.

Emphasis on Context

By seeking to understand as much as possible about a single subject or small group of subjects, case studies specialize in "deep data," or "thick description"--information based on particular contexts that can give research results a more human face. This emphasis can help bridge the gap between abstract research and concrete practice by allowing researchers to compare their firsthand observations with the quantitative results obtained through other methods of research.

Inherent Subjectivity

"The case study has long been stereotyped as the weak sibling among social science methods," and is often criticized as being too subjective and even pseudo-scientific. Likewise, "investigators who do case studies are often regarded as having deviated from their academic disciplines, and their investigations as having insufficient precision (that is, quantification), objectivity and rigor" (Yin 1989). Opponents cite opportunities for subjectivity in the implementation, presentation, and evaluation of case study research. The approach relies on personal interpretation of data and inferences. Results may not be generalizable, are difficult to test for validity, and rarely offer a problem-solving prescription. Simply put, relying on one or a few subjects as a basis for cognitive extrapolations runs the risk of inferring too much from what might be circumstance.

High Investment

Case studies can involve learning more about the subjects being tested than most researchers would care to know--their educational background, emotional background, perceptions of themselves and their surroundings, their likes, dislikes, and so on. Because of its emphasis on "deep data," the case study is out of reach for many large-scale research projects which look at a subject pool in the tens of thousands. A budget request of $10,000 to examine 200 subjects sounds more efficient than a similar request to examine four subjects.

Ethical Considerations

Researchers conducting case studies should consider certain ethical issues. For example, many educational case studies are often financed by people who have, either directly or indirectly, power over both those being studied and those conducting the investigation (1985). This conflict of interests can hinder the credibility of the study.

The personal integrity, sensitivity, and possible prejudices and/or biases of the investigators need to be taken into consideration as well. Personal biases can creep into how the research is conducted, which alternative research methods are used, and how surveys and questionnaires are prepared.

A common complaint in case study research is that investigators change direction during the course of the study unaware that their original research design was inadequate for the revised investigation. Thus, the researchers leave unknown gaps and biases in the study. To avoid this, researchers should report preliminary findings so that the likelihood of bias will be reduced.

Concerns about Reliability, Validity, and Generalizability

Merriam (1985) offers several suggestions for how case study researchers might actively combat the popular attacks on the validity, reliability, and generalizability of case studies:

  • Prolong the Processes of Data Gathering on Site: This will help to insure the accuracy of the findings by providing the researcher with more concrete information upon which to formulate interpretations.
  • Employ the Process of "Triangulation": Use a variety of data sources rather than relying solely upon one avenue of observation. One example of such a data check is what McClintock, Brannon, and Maynard (1985) call a "case cluster method," in which a single unit within a larger case is randomly sampled and that data is treated quantitatively. For instance, Emig's (1971) study employed the case cluster method, singling out the productivity of a single student named Lynn. This cluster profile included an advanced case history of the subject, specific examination and analysis of individual compositions and protocols, and extensive interview sessions. The seven remaining students were then compared with the case of Lynn to ascertain whether there were any shared, or unique, dimensions to the composing process engaged in by these eight students (a minimal numeric sketch of this kind of comparison follows this list).
  • Conduct Member Checks: Initiate and maintain an active corroboration on the interpretation of data between the researcher and those who provided the data. In other words, talk to your subjects.
  • Collect Referential Materials: Complement the file of materials from the actual site with additional document support. For example, Emig (1971) supports her initial propositions with historical accounts by writers such as T.S. Eliot, James Joyce, and D.H. Lawrence. Emig also cites examples of theoretical research done with regard to the creative process, as well as examples of empirical research dealing with the writing of adolescents. Specific attention is then given to the four-stage description of the composing process delineated by Helmholtz, Wallas, and Cowley, as it serves as the focal point in this study.
  • Engage in Peer Consultation: Prior to composing the final draft of the report, researchers should consult with colleagues in order to establish validity through pooled judgment.
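
To make the "treated quantitatively" step of the case cluster method concrete, the short Python sketch below compares a single focal case against the rest of a small group on one coded feature. The feature (revisions per composition), the counts, and the student labels are invented for illustration; they are not Emig's data.

```python
# Hypothetical counts of revisions per composition for eight students;
# "Lynn" stands in for the single, more deeply profiled case.
revisions = {"Lynn": 9, "S2": 3, "S3": 4, "S4": 2,
             "S5": 5, "S6": 3, "S7": 4, "S8": 2}

others = [count for name, count in revisions.items() if name != "Lynn"]
group_mean = sum(others) / len(others)

print(f"Focal case (Lynn): {revisions['Lynn']} revisions")
print(f"Mean for the remaining students: {group_mean:.1f} revisions")
```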

Although little can be done to combat challenges concerning the generalizability of case studies, "most writers suggest that qualitative research should be judged as credible and confirmable as opposed to valid and reliable" (Merriam 1985). Likewise, it has been argued that "rather than transplanting statistical, quantitative notions of generalizability and thus finding qualitative research inadequate, it makes more sense to develop an understanding of generalization that is congruent with the basic characteristics of qualitative inquiry" (1985). After all, criticizing the case study method for being ungeneralizable is comparable to criticizing a washing machine for not being able to tell the correct time. In other words, it is unjust to criticize a method for not being able to do something it was never designed to do in the first place.

Annotated Bibliography

Armisted, C. (1984). How Useful are Case Studies. Training and Development Journal, 38 (2), 75-77.

This article looks at eight types of case studies, offers pros and cons of using case studies in the classroom, and gives suggestions for successfully writing and using case studies.

Bardovi-Harlig, K. (1997). Beyond Methods: Components of Second Language Teacher Education . New York: McGraw-Hill.

A compilation of various research essays which address issues of language teacher education. Essays included are: "Non-native reading research and theory" by Lee, "The case for Psycholinguistics" by VanPatten, and "Assessment and Second Language Teaching" by Gradman and Reed.

Bartlett, L. (1989). A Question of Good Judgment; Interpretation Theory and Qualitative Enquiry Address. 70th Annual Meeting of the American Educational Research Association. San Francisco.

Bartlett selected "quasi-historical" methodology, which focuses on the "truth" found in case records, as one that will provide "good judgments" in educational inquiry. He argues that although the method is not comprehensive, it can try to connect theory with practice.

Baydere, S., et al. (1993). Multimedia conferencing as a tool for collaborative writing: a case study. In Computer Supported Collaborative Writing. New York: Springer-Verlag.

The case study by Baydere et al. is just one of the many essays in this book found in the series "Computer Supported Cooperative Work." Denley, Witefield and May explore similar issues in their essay, "A case study in task analysis for the design of a collaborative document production system."

Berkenkotter, C., Huckin, T., N., & Ackerman J. (1988). Conventions, Conversations, and the Writer: Case Study of a Student in a Rhetoric Ph.D. Program. Research in the Teaching of English, 22, 9-44.

The authors focused on how the writing of their subject, Nate (Ackerman), changed as he became more familiar with his field's discourse community.

Berninger, V., W., and Gans, B., M. (1986). Language Profiles in Nonspeaking Individuals of Normal Intelligence with Severe Cerebral Palsy. Augmentative and Alternative Communication, 2, 45-50.

Argues that generalizations about language abilities in patients with severe cerebral palsy (CP) should be avoided. Standardized tests of different levels of processing oral language, of processing written language, and of producing written language were administered to 3 male participants (aged 9, 16, and 40 yrs).

Bockman, J., R., and Couture, B. (1984). The Case Method in Technical Communication: Theory and Models. Texas: Association of Teachers of Technical Writing.

Examines the study and teaching of technical writing, communication of technical information, and the case method in terms of those applications.

Boehrer, J. (1990). Teaching With Cases: Learning to Question. New Directions for Teaching and Learning, 42 41-57.

This article discusses the origins of the case method, looks at the question of what is a case, gives ideas about learning in case teaching, the purposes it can serve in the classroom, the ground rules for the case discussion, including the role of the question, and new directions for case teaching.

Bowman, W. R. (1993). Evaluating JTPA Programs for Economically Disadvantaged Adults: A Case Study of Utah and General Findings . Washington: National Commission for Employment Policy.

"To encourage state-level evaluations of JTPA, the Commission and the State of Utah co-sponsored this report on the effectiveness of JTPA Title II programs for adults in Utah. The technique used is non-experimental and the comparison group was selected from registrants with Utah's Employment Security. In a step-by-step approach, the report documents how non-experimental techniques can be applied and several specific technical issues can be addressed."

Boyce, A. (1993) The Case Study Approach for Pedagogists. Annual Meeting of the American Alliance for Health, Physical Education, Recreation and Dance. (Address). Washington DC.

This paper addresses how case studies 1) bridge the gap between teaching theory and application, 2) enable students to analyze problems and develop solutions for situations that will be encountered in the real world of teaching, and 3) help students evaluate the feasibility of alternatives and understand the ramifications of a particular course of action.

Carson, J. (1993) The Case Study: Ideal Home of WAC Quantitative and Qualitative Data. Annual Meeting of the Conference on College Composition and Communication. (Address). San Diego.

"Increasingly, one of the most pressing questions for WAC advocates is how to keep [WAC] programs going in the face of numerous difficulties. Case histories offer the best chance for fashioning rhetorical arguments to keep WAC programs going because they offer the opportunity to provide a coherent narrative that contextualizes all documents and data, including what is generally considered scientific data. A case study of the WAC program, . . . at Robert Morris College in Pittsburgh demonstrates the advantages of this research method. Such studies are ideal homes for both naturalistic and positivistic data as well as both quantitative and qualitative information."

Flower, L., & Hayes, J. R. (1981). A Cognitive Process Theory of Writing. College Composition and Communication, 32, 365-87.

No abstract available.

Cromer, R. (1994) A Case Study of Dissociations Between Language and Cognition. Constraints on Language Acquisition: Studies of Atypical Children . Hillsdale: Lawrence Erlbaum Associates, 141-153.

Crossley, M. (1983) Case Study in Comparative and International Education: An Approach to Bridging the Theory-Practice Gap. Proceedings of the 11th Annual Conference of the Australian Comparative and International Education Society. Hamilton, NZ.

Case study research, as presented here, helps bridge the theory-practice gap in comparative and international research studies of education because it focuses on the practical, day-to-day context rather than on the national arena. The paper asserts that the case study method can be valuable at all levels of research, formation, and verification of theories in education.

Daillak, R., H., and Alkin, M., C. (1982). Qualitative Studies in Context: Reflections on the CSE Studies of Evaluation Use . California: EDRS

The report shows how the Center of the Study of Evaluation (CSE) applied qualitative techniques to a study of evaluation information use in local, Los Angeles schools. It critiques the effectiveness and the limitations of using case study, evaluation, field study, and user interview survey methodologies.

Davey, L. (1991). The Application of Case Study Evaluations. ERIC/TM Digest.

This article examines six types of case studies, the type of evaluation questions that can be answered, the functions served, some design features, and some pitfalls of the method.

Deutch, C. E. (1996). A course in research ethics for graduate students. College Teaching, 44, 2, 56-60.

This article describes a one-credit discussion course in research ethics for graduate students in biology. Case studies are focused on within the four parts of the course: 1) major issues, 2) practical issues in scholarly work, 3) ownership of research results, and 4) training and personal decisions.

DeVoss, G. (1981). Ethics in Fieldwork Research. RIE 27p. (ERIC)

This article examines four of the ethical problems that can happen when conducting case study research: acquiring permission to do research, knowing when to stop digging, the pitfalls of doing collaborative research, and preserving the integrity of the participants.

Driscoll, A. (1985). Case Study of a Research Intervention: the University of Utah’s Collaborative Approach . San Francisco: Far West Library for Educational Research Development.

Paper presented at the annual meeting of the American Association of Colleges of Teacher Education, Denver, CO, March 1985. Offers information of in-service training, specifically case studies application.

Ellram, L. M. (1996). The Use of the Case Study Method in Logistics Research. Journal of Business Logistics, 17, 2, 93.

This article discusses the increased use of case study in business research, and the lack of understanding of when and how to use case study methodology in business.

Emig, J. (1971). The Composing Processes of Twelfth Graders. Urbana: NCTE.

This case study uses observation, tape recordings, writing samples, and school records to show that writing in reflexive and extensive situations caused different lengths of discourse and different clusterings of the components of the writing process.

Feagin, J. R. (1991). A Case For the Case Study . Chapel Hill: The University of North Carolina Press.

This book discusses the nature, characteristics, and basic methodological issues of the case study as a research method.

Feldman, H., Holland, A., & Keefe, K. (1989) Language Abilities after Left Hemisphere Brain Injury: A Case Study of Twins. Topics in Early Childhood Special Education, 9, 32-47.

"Describes the language abilities of 2 twin pairs in which 1 twin (the experimental) suffered brain injury to the left cerebral hemisphere around the time of birth and1 twin (the control) did not. One pair of twins was initially assessed at age 23 mo. and the other at about 30 mo.; they were subsequently evaluated in their homes 3 times at about 6-mo intervals."

Fidel, R. (1984). The Case Study Method: A Case Study. Library and Information Science Research, 6.

The article describes the use of case study methodology to systematically develop a model of online searching behavior in which study design is flexible, subject manner determines data gathering and analyses, and procedures adapt to the study's progressive change.

Flower, L., & Hayes, J. R. (1984). Images, Plans and Prose: The Representation of Meaning in Writing. Written Communication, 1, 120-160.

Explores the ways in which writers actually use different forms of knowing to create prose.

Frey, L. R. (1992). Interpreting Communication Research: A Case Study Approach. Englewood Cliffs, NJ: Prentice Hall.

The book discusses research methodologies in the Communication field. It focuses on how case studies bridge the gap between communication research, theory, and practice.

Gilbert, V. K. (1981). The Case Study as a Research Methodology: Difficulties and Advantages of Integrating the Positivistic, Phenomenological and Grounded Theory Approaches . The Annual Meeting of the Canadian Association for the Study of Educational Administration. (Address) Halifax, NS, Can.

This study on an innovative secondary school in England shows how a "low-profile" participant-observer case study was crucial to the initial observation, the testing of hypotheses, the interpretive approach, and the grounded theory.

Gilgun, J. F. (1994). A Case for Case Studies in Social Work Research. Social Work, 39, 4, 371-381.

This article defines case study research, presents guidelines for evaluation of case studies, and shows the relevance of case studies to social work research. It also looks at issues such as evaluation and interpretations of case studies.

Glennan, S. L., Sharp-Bittner, M. A. & Tullos, D. C. (1991). Augmentative and Alternative Communication Training with a Nonspeaking Adult: Lessons from MH. Augmentative and Alternative Communication, 7, 240-7.

"A response-guided case study documented changes in a nonspeaking 36-yr-old man's ability to communicate using 3 trained augmentative communication modes. . . . Data were collected in videotaped interaction sessions between the nonspeaking adult and a series of adult speaking."

Graves, D. (1981). An Examination of the Writing Processes of Seven Year Old Children. Research in the Teaching of English, 15, 113-134.

Hamel, J. (1993). Case Study Methods. Newbury Park: Sage.

"In a most economical fashion, Hamel provides a practical guide for producing theoretically sharp and empirically sound sociological case studies. A central idea put forth by Hamel is that case studies must "locate the global in the local" thus making the careful selection of the research site the most critical decision in the analytic process."

Karthigesu, R. (1986, July). Television as a Tool for Nation-Building in the Third World: A Post-Colonial Pattern, Using Malaysia as a Case-Study. International Television Studies Conference. (Address). London, 10-12.

"The extent to which Television Malaysia, as a national mass media organization, has been able to play a role in nation building in the post-colonial period is . . . studied in two parts: how the choice of a model of nation building determines the character of the organization; and how the character of the organization influences the output of the organization."

Kenny, R. (1984). Making the Case for the Case Study. Journal of Curriculum Studies, 16, (1), 37-51.

The article looks at how and why the case study is justified as a viable and valuable approach to educational research and program evaluation.

Knirk, F. (1991). Case Materials: Research and Practice. Performance Improvement Quarterly, 4 (1 ), 73-81.

The article addresses the effectiveness of case studies, subject areas where case studies are commonly used, recent examples of their use, and case study design considerations.

Klos, D. (1976). Students as Case Writers. Teaching of Psychology, 3.2, 63-66.

This article reviews a course in which students gather data for an original case study of another person. The task requires the students to design the study, collect the data, write the narrative, and interpret the findings.

Leftwich, A. (1981). The Politics of Case Study: Problems of Innovation in University Education. Higher Education Review, 13.2, 38-64.

The article discusses the use of case studies as a teaching method. Emphasis is on the instructional materials, interdisciplinarity, and the complex relationships within the university that help or hinder the method.

Mabrito, M. (1991, Oct.). Electronic Mail as a Vehicle for Peer Response: Conversations of High and Low Apprehensive Writers. Written Communication, 509-32.

McCarthy, S., J. (1955). The Influence of Classroom Discourse on Student Texts: The Case of Ella . East Lansing: Institute for Research on Teaching.

A look at how students of color become marginalized within traditional classroom discourse. The essay follows the struggles of one black student: Ella.

Matsuhashi, A., ed. (1987). Writing in Real Time: Modeling Production Processes. Norwood, NJ: Ablex Publishing Corporation.

"Investigates how writers plan to produce discourse for different purposes (to report, to generalize, and to persuade), as well as how writers plan for sentence-level units of language. To learn about planning, an observational measure of pause time was used" (ERIC).

Merriam, S. B. (1985). The Case Study in Educational Research: A Review of Selected Literature. Journal of Educational Thought, 19.3, 204-17.

The article examines the characteristics of and philosophical assumptions underlying the case study, the mechanics of conducting a case study, and concerns about the reliability, validity, and generalizability of the method.

---. (1988). Case Study Research in Education: A Qualitative Approach. San Francisco: Jossey-Bass.

Merry, S. E., & Milner, N. eds. (1993). The Possibility of Popular Justice: A Case Study of Community Mediation in the United States . Ann Arbor: U of Michigan.

". . . this volume presents a case study of one experiment in popular justice, the San Francisco Community Boards. This program has made an explicit claim to create an alternative justice, or new justice, in the midst of a society ordered by state law. The contributors to this volume explore the history and experience of the program and compare it to other versions of popular justice in the United States, Europe, and the Third World."

Merseth, K. K. (1991). The Case for Cases in Teacher Education. RIE. 42p. (ERIC).

This monograph argues that the case method of instruction offers unique potential for revitalizing the field of teacher education.

Michaels, S. (1987). Text and Context: A New Approach to the Study of Classroom Writing. Discourse Processes, 10, 321-346.

"This paper argues for and illustrates an approach to the study of writing that integrates ethnographic analysis of classroom interaction with linguistic analysis of written texts and teacher/student conversational exchanges. The approach is illustrated through a case study of writing in a single sixth grade classroom during a single writing assignment."

Milburn, G. (1995). Deciphering a Code or Unraveling a Riddle: A Case Study in the Application of a Humanistic Metaphor to the Reporting of Social Studies Teaching. Theory and Research in Education, 13.

This citation serves as an example of how case studies document learning procedures in a senior-level economics course.

Milley, J. E. (1979). An Investigation of Case Study as an Approach to Program Evaluation. 19th Annual Forum of the Association for Institutional Research. (Address). San Diego.

The case study method merged a narrative report focusing on the evaluator as participant-observer with document review, interview, content analysis, attitude questionnaire survey, and sociogram analysis. Milley argues that case study program evaluation has great potential for widespread use.

Minnis, J. R. (1985, Sept.). Ethnography, Case Study, Grounded Theory, and Distance Education Research. Distance Education, 6.2.

This article describes and defines the strengths and weaknesses of ethnography, case study, and grounded theory.

Nunan, D. (1992). Collaborative language learning and teaching . New York: Cambridge University Press.

Included in this series of essays is Peter Sturman’s "Team Teaching: a case study from Japan" and David Nunan’s own "Toward a collaborative approach to curriculum development: a case study."

Nystrand, M., ed. (1982). What Writers Know: The Language, Process, and Structure of Written Discourse . New York: Academic Press.

Owenby, P. H. (1992). Making Case Studies Come Alive. Training, 29, (1), 43-46. (ERIC)

This article provides tips for writing more effective case studies.

---. (1981). Pausing and Planning: The Tempo of Writer Discourse Production. Research in the Teaching of English, 15 (2),113-34.

Perl, S. (1979). The Composing Processes of Unskilled College Writers. Research in the Teaching of English, 13, 317-336.

"Summarizes a study of five unskilled college writers, focusing especially on one of the five, and discusses the findings in light of current pedagogical practice and research design."

Pilcher J. and A. Coffey. eds. (1996). Gender and Qualitative Research . Brookfield: Aldershot, Hants, England.

This book provides a series of essays which look at gender identity research, qualitative research and applications of case study to questions of gendered pedagogy.

Pirie, B. S. (1993). The Case of Morty: A Four Year Study. Gifted Education International, 9 (2), 105-109.

This case study describes a boy from kindergarten through third grade with above average intelligence but difficulty in learning to read, write, and spell.

Popkewitz, T. (1993). Changing Patterns of Power: Social Regulation and Teacher Education Reform. Albany: SUNY Press.

Popkewitz edits this series of essays that address case studies on educational change and the training of teachers. The essays vary in terms of discipline and scope. Also, several authors include case studies of educational practices in countries other than the United States.

---. (1984). The Predrafting Processes of Four High- and Four Low Apprehensive Writers. Research in the Teaching of English, 18, (1), 45-64.

Rasmussen, P. (1985, March) A Case Study on the Evaluation of Research at the Technical University of Denmark. International Journal of Institutional Management in Higher Education, 9 (1).

This is an example of a case study methodology used to evaluate the chemistry and chemical engineering departments at the University of Denmark.

Roth, K. J. (1986). Curriculum Materials, Teacher Talk, and Student Learning: Case Studies in Fifth-Grade Science Teaching . East Lansing: Institute for Research on Teaching.

Roth offers case studies on elementary teachers, elementary school teaching, science studies and teaching, and verbal learning.

Selfe, C. L. (1985). An Apprehensive Writer Composes. In M. Rose (Ed.), When a Writer Can't Write: Studies in Writer's Block and Other Composing-Process Problems (pp. 83-95). New York: Guilford.

Smith-Lewis, M., R. and Ford, A. (1987). A User's Perspective on Augmentative Communication. Augmentative and Alternative Communication, 3, 12-7.

"During a series of in-depth interviews, a 25-yr-old woman with cerebral palsy who utilized augmentative communication reflected on the effectiveness of the devices designed for her during her school career."

St. Pierre, R., G. (1980, April). Follow Through: A Case Study in Metaevaluation Research . 64th Annual Meeting of the American Educational Research Association. (Address).

The three approaches to metaevaluation are evaluation of primary evaluations, integrative meta-analysis with combined primary evaluation results, and re-analysis of the raw data from a primary evaluation.

Stahler, T., M. (1996, Feb.) Early Field Experiences: A Model That Worked. ERIC.

"This case study of a field and theory class examines a model designed to provide meaningful field experiences for preservice teachers while remaining consistent with the instructor's beliefs about the role of teacher education in preparing teachers for the classroom."

Stake, R. E. (1995). The Art of Case Study Research. Thousand Oaks: Sage Publications.

This book examines case study research in education and case study methodology.

Stiegelbauer, S. (1984) Community, Context, and Co-curriculum: Situational Factors Influencing School Improvements in a Study of High Schools. Presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Discussion of several case studies: one looking at high school environments, another examining educational innovations.

Stolovitch, H. (1990). Case Study Method. Performance And Instruction, 29, (9), 35-37.

This article describes the case study method as a form of simulation and presents guidelines for their use in professional training situations.

Thaller, E. (1994). Bibliography for the Case Method: Using Case Studies in Teacher Education. RIE. 37 p.

This bibliography presents approximately 450 citations on the use of case studies in teacher education from 1921-1993.

Thrane, T. (1986). On Delimiting the Senses of Near-Synonyms in Historical Semantics: A Case Study of Adjectives of 'Moral Sufficiency' in the Old English Andreas. Linguistics Across Historical and Geographical Boundaries: In Honor of Jacek Fisiak on the Occasion of his Fiftieth Birthday . Berlin: Mouton de Gruyter.

United Nations. (1975). Food and Agriculture Organization. Report on the FAO/UNFPA Seminar on Methodology, Research and Country: Case Studies on Population, Employment and Productivity . Rome: United Nations.

This example case study shows how the methodology can be used in a demographic and psychographic evaluation. At the same time, it discusses the formation and instigation of the case study methodology itself.

Van Vugt, J. P., ed. (1994). Aids Prevention and Services: Community Based Research . Westport: Bergin and Garvey.

"This volume has been five years in the making. In the process, some of the policy applications called for have met with limited success, such as free needle exchange programs in a limited number of American cities, providing condoms to prison inmates, and advertisements that depict same-sex couples. Rather than dating our chapters that deal with such subjects, such policy applications are verifications of the type of research demonstrated here. Furthermore, they indicate the critical need to continue community based research in the various communities threatened by acquired immuno-deficiency syndrome (AIDS) . . . "

Welch, W., ed. (1981, May). Case Study Methodology in Educational Evaluation. Proceedings of the Minnesota Evaluation Conference. Minnesota. (Address).

The four papers in these proceedings provide a comprehensive picture of the rationale, methodology, strengths, and limitations of case studies.

Williams, G. (1987). The Case Method: An Approach to Teaching and Learning in Educational Administration. RIE, 31p.

This paper examines the viability of the case method as a teaching and learning strategy in instructional systems geared toward the training of personnel of the administration of various aspects of educational systems.

Yin, R. K. (1993). Advancing Rigorous Methodologies: A Review of 'Towards Rigor in Reviews of Multivocal Literatures.' Review of Educational Research, 61, (3).

"R. T. Ogawa and B. Malen's article does not meet its own recommended standards for rigorous testing and presentation of its own conclusions. Use of the exploratory case study to analyze multivocal literatures is not supported, and the claim of grounded theory to analyze multivocal literatures may be stronger."

---. (1989). Case Study Research: Design and Methods. London: Sage Publications Inc.

This book discusses in great detail the entire design process of the case study, including entire chapters on collecting evidence, analyzing evidence, composing the case study report, and designing single and multiple case studies.

Related Links

Consider the following list of related Web sites for more information on the topic of case study research. Note: although many of the links cover the general category of qualitative research, all have sections that address issues of case studies.

  • Sage Publications on Qualitative Methodology: Search here for a comprehensive list of new books being published about "Qualitative Methodology" http://www.sagepub.co.uk/
  • The International Journal of Qualitative Studies in Education: An on-line journal "to enhance the theory and practice of qualitative research in education." On-line submissions are welcome. http://www.tandf.co.uk/journals/tf/09518398.html
  • Qualitative Research Resources on the Internet: From syllabi to home pages to bibliographies. All links relate somehow to qualitative research. http://www.nova.edu/ssss/QR/qualres.html

Citation Information

Bronwyn Becker, Patrick Dawson, Karen Devine, Carla Hannum, Steve Hill, Jon Leydens, Debbie Matuskevich, Carol Traver, and Mike Palmquist. (1994-2024). Case Studies. The WAC Clearinghouse. Colorado State University. Available at https://wac.colostate.edu/repository/writing/guides/.


Library Research Guides - University of Wisconsin Ebling Library


Nursing Resources: Types of Research within Qualitative and Quantitative


Aspects of Quantitative (Empirical) Research

  • Statement of purpose—what was studied and why.
  • Description of the methodology (experimental group, control group, variables, test conditions, test subjects, etc.).
  • Results (usually numeric in form, presented in tables or graphs, often with statistical analysis).
  • Conclusions drawn from the results.
  • Footnotes, a bibliography, author credentials.

Hint: the abstract (summary) of an article is the first place to check for most of the above features.  The abstract appears both in the database you search and at the top of the actual article.

Types of Quantitative Research

There are four (4) main types of quantitative designs: descriptive, correlational, quasi-experimental, and experimental.

samples.jbpub.com/9780763780586/80586_CH03_Keele.pdf

Types of Qualitative Research

http://wilderdom.com/OEcourses/PROFLIT/Class6Qualitative1.htm



What is a Case Study? [+6 Types of Case Studies]

By Ronita Mohan , Sep 20, 2021


Case studies have become powerful business tools. But what is a case study? What are the benefits of creating one? Are there limitations to the format?

If you’ve asked yourself these questions, our helpful guide will clear things up. Learn how to use a case study for business. Find out how case analysis works in psychology and research.

We’ve also got examples of case studies to inspire you.

Haven’t made a case study before? You can easily  create a case study  with Venngage’s customizable templates.


Click to jump ahead:

  • What is a case study?
  • What is the case study method?
  • Benefits of case studies
  • Limitations of case studies
  • Types of case studies
  • FAQs about case studies

Case studies are research methodologies. They examine subjects, projects, or organizations to tell a story.


Numerous sectors use case analyses. The social sciences, social work, and psychology create studies regularly.

Healthcare industries write reports on patients and diagnoses. Marketing case study examples , like the one below, highlight the benefits of a business product.

Bold Social Media Business Case Study Template


Now that you know what a case study is, we explain how case reports are used in three different industries.

What is a business case study?

A business or marketing case study aims at showcasing a successful partnership. This can be between a brand and a client. Or the case study can examine a brand’s project.

There is a perception that case studies are used to advertise a brand. But effective reports, like the one below, can show clients how a brand can support them.

Light Simple Business Case Study Template

Hubspot created a case study on a customer that successfully scaled its business. The report outlines the various Hubspot tools used to achieve these results.

Hubspot case study

Hubspot also added a video with testimonials from the client company’s employees.

So, what is the purpose of a case study for businesses? The corporate world is competitive, and the people who run companies can be on the fence about which brand to work with.

Business reports stand out aesthetically as well. They use brand colors and brand fonts, usually a combination of the client’s and the brand’s.

With the Venngage  My Brand Kit  feature, businesses can automatically apply their brand to designs.

A business case study, like the one below, acts as social proof. This helps customers decide between your brand and your competitors.

Modern lead Generation Business Case Study Template

Don’t know how to design a report? You can learn  how to write a case study  with Venngage’s guide. We also share design tips and examples that will help you convert.

Related: 55+ Annual Report Design Templates, Inspirational Examples & Tips [Updated]

What is a case study in psychology?

In the field of psychology, case studies focus on a particular subject. Psychology case histories also examine human behaviors.

Case reports search for commonalities between humans. They are also used to recommend further research, or to elaborate on a solution for a behavioral ailment.

The American Psychological Association has a number of case studies on real-life clients. Note how the reports are more text-heavy than a business case study.

What is a case study in psychology? Behavior therapy example

Famous psychologists such as Sigmund Freud popularized the use of case studies in the field by regularly interviewing and observing subjects, among them the well-known patient Anna O. These detailed observations helped build the field of psychology.

It is important to note that psychological studies must be conducted by professionals. Psychologists, psychiatrists and therapists should be the researchers in these cases.

Related: What Netflix’s Top 50 Shows Can Teach Us About Font Psychology [Infographic]

What is a case study in research?

Research is a necessary part of every case study, but certain fields rely on them especially heavily. These fields include user research, healthcare, education, and social work.

For example, this UX Design  report examined the public perception of a client. The brand researched and implemented new visuals to improve it. The study breaks down this research through lessons learned.

What is a case study in research? UX Design case study example

Clinical reports are a necessity in the medical field. These documents are used to share knowledge with other professionals. They also help examine new or unusual diseases or symptoms.

The pandemic has led to a significant increase in research. For example,  Spectrum Health  studied the value of health systems in the pandemic. They created the study by examining community outreach.

What is a case study in research? Spectrum healthcare example

The pandemic has significantly impacted the field of education, leading to numerous examinations of remote studying. There have also been studies on how students react to decreased peer communication.

Social work case reports often have a community focus. They can also examine public health responses. In certain regions, social workers study disaster responses.

You now know what case studies in various fields are. In the next step of our guide, we explain the case study method.

Return to Table of Contents

A case analysis is a deep dive into a subject. To facilitate this, case studies are built on interviews and observations. The example below would have been created after numerous interviews.

Case studies are largely qualitative. They analyze and describe phenomena. While some data is included, a case analysis is not quantitative.

There are a few steps in the case method. You have to start by identifying the subject of your study. Then determine what kind of research is required.

In natural sciences, case studies can take years to complete. Business reports, like this one, don’t take that long. A few weeks of interviews should be enough.

Blue Simple Business Case Study Template

The case method will vary depending on the industry. Reports will also look different once produced.

As you will have seen, business reports are more colorful. The design is also more accessible . Healthcare and psychology reports are more text-heavy.

Designing case reports takes time and energy. So, is it worth taking the time to write them? Here are the benefits of creating case studies.

  • Collects large amounts of information
  • Helps formulate hypotheses
  • Builds the case for further research
  • Discovers new insights into a subject
  • Builds brand trust and loyalty
  • Engages customers through stories

For example, the business study below creates a story around a brand partnership. It makes for engaging reading. The study also shows evidence backing up the information.

Blue Content Marketing Case Study Template

We’ve shared the benefits of why studies are needed. We will also look at the limitations of creating them.

Related: How to Present a Case Study like a Pro (With Examples)

There are a few disadvantages to conducting a case analysis. The limitations will vary according to the industry.

  • Responses from interviews are subjective
  • Subjects may tailor responses to the researcher
  • Studies can’t always be replicated
  • In certain industries, analyses can take time and be expensive
  • Risk of generalizing the results to a larger population

These are some of the common weaknesses of creating case reports. If you’re on the fence, look at the competition in your industry.

Other brands or professionals are building reports, like this example. In that case, you may want to do the same.

Coral content marketing case study template

There are six common types of case reports. Depending on your industry, you might use one of these types.

  • Descriptive case studies
  • Explanatory case studies
  • Exploratory case reports
  • Intrinsic case studies
  • Instrumental case studies
  • Collective case reports

6 Types Of Case Studies List


We go into more detail about each type of study in the guide below.

Related:  15+ Professional Case Study Examples [Design Tips + Templates]

When you have an existing hypothesis, you can design a descriptive study. This type of report starts with a description. The aim is to find connections between the subject being studied and a theory.

Once these connections are found, the study can conclude. The results of this type of study will usually suggest how to develop a theory further.

A study like the one below has concrete results. A descriptive report would use the quantitative data as a suggestion for researching the subject deeply.

Lead generation business case study template

When an incident occurs in a field, an explanation is required. An explanatory report investigates the cause of the event. It will include explanations for that cause.

The study will also share details about the impact of the event. In most cases, this report will use evidence to predict future occurrences. The results of explanatory reports are definitive.

Note that there is no room for interpretation here. The results are absolute.

The study below is a good example. It explains how one brand used the services of another. It concludes by showing definitive proof that the collaboration was successful.

Bold Content Marketing Case Study Template

Another example of this study would be in the automotive industry. If a vehicle fails a test, an explanatory study will examine why. The results could show that the failure was because of a particular part.

Related: How to Write a Case Study [+ Design Tips]

An explanatory report is a self-contained document. An exploratory one is only the beginning of an investigation.

Exploratory cases act as the starting point of studies. This is usually conducted as a precursor to large-scale investigations. The research is used to suggest why further investigations are needed.

An exploratory study can also be used to suggest methods for further examination.

For example, the below analysis could have found inconclusive results. In that situation, it would be the basis for an in-depth study.

Teal Social Media Business Case Study Template

Intrinsic studies are more common in the field of psychology. These reports can also be conducted in healthcare or social work.

These types of studies focus on a unique subject, such as a patient. They can sometimes study groups close to the researcher.

The aim of such studies is to understand the subject better. This requires learning their history. The researcher will also examine how they interact with their environment.

For instance, if the case study below was about a unique brand, it could be an intrinsic study.

Vibrant Content Marketing Case Study Template

Once the study is complete, the researcher will have developed a better understanding of a phenomenon. This phenomenon will likely not have been studied or theorized about before.

Examples of intrinsic case analysis can be found across psychology. Jean Piaget, for example, developed his theories of cognitive development through intrinsic studies of his own children.

Related: What Disney Villains Can Tell Us About Color Psychology [Infographic]

This is another type of study seen in medical and psychology fields. Instrumental reports are created to examine more than just the primary subject.

When research is conducted for an instrumental study, it is to provide insight into a larger phenomenon. The subject matter is usually studied because it is the best available example of that phenomenon.

Purple SAAS Business Case Study Template

Assume a brand is examining lead generation strategies. It may want to show that visual marketing is the definitive lead generation tool. The brand can conduct an instrumental case study to examine this phenomenon.

Collective studies are based on instrumental case reports. These types of studies examine multiple reports.

There are a number of reasons why collective reports are created:

  • To provide evidence for starting a new study
  • To find patterns between multiple instrumental reports
  • To find differences in similar types of cases
  • To gain a deeper understanding of a complex phenomenon
  • To understand a phenomenon across diverse contexts

A researcher could use multiple reports, like the one below, to build a collective case report.

Social Media Business Case Study template

Related: 10+ Case Study Infographic Templates That Convert

What makes a case study a case study?

A case study follows a very particular research methodology: an in-depth study of a person, a group of individuals, a community, or an organization. Case reports examine real-world phenomena within a set context.

How long should a case study be?

The length of a case study depends on the industry and on the story you're telling. Most case studies run between 500 and 1,500 words, but you can increase the length if you have more details to share.

What should you ask in a case study?

The one thing you shouldn't ask is yes-or-no questions. Case studies are qualitative, and closed questions won't give you the information you need.

Ask your client about the problems they faced, the solutions they found, and what they think the ideal solution would be. Leave room for follow-up questions; these will help you build out the study.

How to present a case study?

When you’re ready to present a case study, begin by providing a summary of the problem or challenge you were addressing. Follow this with an outline of the solution you implemented, and support this with the results you achieved, backed by relevant data. Incorporate visual aids like slides, graphs, and images to make your case study presentation more engaging and impactful.

Now that you know what a case study is, you can begin creating one. These reports are a great tool for analyzing brands, and they are useful in a variety of other fields as well.

Use a visual communication platform like Venngage to design case studies. With Venngage's templates, you can easily create branded, engaging reports, all without design experience.


MFT 204: INDIVIDUAL AND FAMILY LIFE CYCLE DEVELOPMENT (Bosley, 2024)


Mixed Methods Research

As its name suggests, mixed methods research involves using elements of both quantitative and qualitative research methods. Using mixed methods, a researcher can more fully explore a research question and provide greater insight. 

Need to find quantitative or qualitative research?

The CINAHL and PsycINFO databases both allow for the application of filters that will yield results that are either qualitative or quantitative in nature. 

For detailed information about how to apply those filters in CINAHL or PsycINFO, visit the library's Quantitative and Qualitative LibGuide.

What is Quantitative Research?

Quantitative research gathers data that can be measured numerically and analyzed mathematically. Quantitative research attempts to answer research questions through the quantification of data. 

Indicators of quantitative research include:

contains statistical analysis 

large sample size 

objective - little room to argue with the numbers 

types of research: descriptive studies, exploratory studies, experimental studies, explanatory studies, predictive studies, clinical trials 

What is Qualitative Research?

Qualitative research is based upon data that is gathered by observation. Qualitative research articles will attempt to answer questions that cannot be measured by numbers but rather by perceived meaning. Qualitative research will likely include interviews, case studies, ethnography, or focus groups. 

Indicators of qualitative research include:

interviews or focus groups 

small sample size 

subjective - researchers are often interpreting meaning 

methods used: phenomenology, ethnography, grounded theory, historical method, case study 



Quantitative and Qualitative Approaches to Generalization and Replication–A Representationalist View

In this paper, we provide a re-interpretation of qualitative and quantitative modeling from a representationalist perspective. In this view, both approaches attempt to construct abstract representations of empirical relational structures. Whereas quantitative research uses variable-based models that abstract from individual cases, qualitative research favors case-based models that abstract from individual characteristics. Variable-based models are usually stated in the form of quantified sentences (scientific laws). This syntactic structure implies that sentences about individual cases are derived using deductive reasoning. In contrast, case-based models are usually stated using context-dependent existential sentences (qualitative statements). This syntactic structure implies that sentences about other cases are justifiable by inductive reasoning. We apply this representationalist perspective to the problems of generalization and replication. Using the analytical framework of modal logic, we argue that the modes of reasoning are often not only applied to the context that has been studied empirically, but also on a between-contexts level. Consequently, quantitative researchers mostly adhere to a top-down strategy of generalization, whereas qualitative researchers usually follow a bottom-up strategy of generalization. Depending on which strategy is employed, the role of replication attempts is very different. In deductive reasoning, replication attempts serve as empirical tests of the underlying theory. Therefore, failed replications imply a faulty theory. From an inductive perspective, however, replication attempts serve to explore the scope of the theory. Consequently, failed replications do not question the theory per se , but help to shape its boundary conditions. We conclude that quantitative research may benefit from a bottom-up generalization strategy as it is employed in most qualitative research programs. Inductive reasoning forces us to think about the boundary conditions of our theories and provides a framework for generalization beyond statistical testing. In this perspective, failed replications are just as informative as successful replications, because they help to explore the scope of our theories.

Introduction

Qualitative and quantitative research strategies have long been treated as opposing paradigms. In recent years, there have been attempts to integrate both strategies. These “mixed methods” approaches treat qualitative and quantitative methodologies as complementary, rather than opposing, strategies (Creswell, 2015). However, whilst it acknowledges that both strategies have their benefits, this “integration” remains purely pragmatic. Hence, mixed methods methodology does not provide a conceptual unification of the two approaches.

Lacking a common methodological background, qualitative and quantitative research methodologies have developed rather distinct standards with regard to the aims and scope of empirical science (Freeman et al., 2007 ). These different standards affect the way researchers handle contradictory empirical findings. For example, many empirical findings in psychology have failed to replicate in recent years (Klein et al., 2014 ; Open Science, Collaboration, 2015 ). This “replication crisis” has been discussed on statistical, theoretical and social grounds and continues to have a wide impact on quantitative research practices like, for example, open science initiatives, pre-registered studies and a re-evaluation of statistical significance testing (Everett and Earp, 2015 ; Maxwell et al., 2015 ; Shrout and Rodgers, 2018 ; Trafimow, 2018 ; Wiggins and Chrisopherson, 2019 ).

However, qualitative research seems to be hardly affected by this discussion. In this paper, we argue that the latter is a direct consequence of how the concept of generalizability is conceived in the two approaches. Whereas most of quantitative psychology is committed to a top-down strategy of generalization based on the idea of random sampling from an abstract population, qualitative studies usually rely on a bottom-up strategy of generalization that is grounded in the successive exploration of the field by means of theoretically sampled cases.

Here, we show that a common methodological framework for qualitative and quantitative research methodologies is possible. We accomplish this by introducing a formal description of quantitative and qualitative models from a representationalist perspective: both approaches can be reconstructed as special kinds of representations for empirical relational structures. We then use this framework to analyze the generalization strategies used in the two approaches. These turn out to be logically independent of the type of model. This has wide implications for psychological research. First, a top-down generalization strategy is compatible with a qualitative modeling approach. This implies that mainstream psychology may benefit from qualitative methods when a numerical representation turns out to be difficult or impossible, without the need to commit to a “qualitative” philosophy of science. Second, quantitative research may exploit the bottom-up generalization strategy that is inherent to many qualitative approaches. This offers a new perspective on unsuccessful replications by treating them not as scientific failures, but as a valuable source of information about the scope of a theory.

The Quantitative Strategy–Numbers and Functions

Quantitative science is about finding valid mathematical representations for empirical phenomena. In most cases, these mathematical representations have the form of functional relations between a set of variables. One major challenge of quantitative modeling consists in constructing valid measures for these variables. Formally, to measure a variable means to construct a numerical representation of the underlying empirical relational structure (Krantz et al., 1971 ). For example, take the behaviors of a group of students in a classroom: “to listen,” “to take notes,” and “to ask critical questions.” One may now ask whether it is possible to assign numbers to the students, such that the relations between the assigned numbers are of the same kind as the relations between the values of an underlying variable, like, e.g., “engagement.” The observed behaviors in the classroom constitute an empirical relational structure, in the sense that for every student-behavior tuple, one can observe whether it is true or not. These observations can be represented in a person × behavior matrix 1 (compare Figure 1 ). Given this relational structure satisfies certain conditions (i.e., the axioms of a measurement model), one can assign numbers to the students and the behaviors, such that the relations between the numbers resemble the corresponding numerical relations. For example, if there is a unique ordering in the empirical observations with regard to which person shows which behavior, the assigned numbers have to constitute a corresponding unique ordering, as well. Such an ordering coincides with the person × behavior matrix forming a triangle-shaped relation and is formally represented by a Guttman scale (Guttman, 1944 ). There are various measurement models available for different empirical structures (Suppes et al., 1971 ). In the case of probabilistic relations, Item-Response models may be considered as a special kind of measurement model (Borsboom, 2005 ).

Figure 1. Constructing a numerical representation from an empirical relational structure. Due to the unique ordering of persons with regard to behaviors (indicated by the triangular shape of the relation), it is possible to construct a Guttman scale by assigning a number to each of the individuals, representing the number of relevant behaviors shown by the individual. The resulting variable (“engagement”) can then be described by means of statistical analyses, like, e.g., plotting the frequency distribution.
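
To make the construction in Figure 1 concrete, here is a minimal Python sketch of the same idea. The matrix, the persons, and the resulting scores are invented for illustration (they are not the paper's data); the snippet simply checks the triangular ("prefix") structure described above and, if it holds, assigns each person a score equal to the number of behaviors shown:

import numpy as np

# Hypothetical person x behavior matrix (1 = behavior observed); illustrative only.
behaviors = ["listen", "take notes", "ask critical questions"]
persons = ["A", "B", "C", "D", "E"]
X = np.array([
    [1, 1, 1],   # A shows all three behaviors
    [1, 1, 0],   # B
    [1, 1, 0],   # C
    [1, 0, 0],   # D
    [0, 0, 0],   # E
])

def is_guttman_scale(matrix: np.ndarray) -> bool:
    """True if columns can be ordered so every row is a block of 1s followed by 0s,
    i.e., the persons' behavior sets are nested (the 'triangle' in Figure 1)."""
    ordered = matrix[:, np.argsort(-matrix.sum(axis=0))]  # most common behavior first
    return all((row[:row.sum()] == 1).all() for row in ordered)

if is_guttman_scale(X):
    # The scale value ("engagement") is the number of relevant behaviors shown.
    engagement = dict(zip(persons, X.sum(axis=1).tolist()))
    print(engagement)  # {'A': 3, 'B': 2, 'C': 2, 'D': 1, 'E': 0}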

Although essential, measurement is only the first step of quantitative modeling. Consider a slightly richer empirical structure, where we observe three additional behaviors: “to doodle,” “to chat,” and “to play.” Like above, one may ask, whether there is a unique ordering of the students with regard to these behaviors that can be represented by an underlying variable (i.e., whether the matrix forms a Guttman scale). If this is the case, we may assign corresponding numbers to the students and call this variable “distraction.” In our example, such a representation is possible. We can thus assign two numbers to each student, one representing his or her “engagement” and one representing his or her “distraction” (compare Figure 2 ). These measurements can now be used to construct a quantitative model by relating the two variables by a mathematical function. In the simplest case, this may be a linear function. This functional relation constitutes a quantitative model of the empirical relational structure under study (like, e.g., linear regression). Given the model equation and the rules for assigning the numbers (i.e., the instrumentations of the two variables), the set of admissible empirical structures is limited from all possible structures to a rather small subset. This constitutes the empirical content of the model 2 (Popper, 1935 ).

Figure 2. Constructing a numerical model from an empirical relational structure. Since there are two distinct classes of behaviors that each form a Guttman scale, it is possible to assign two numbers to each individual, correspondingly. The resulting variables (“engagement” and “distraction”) can then be related by a mathematical function, which is indicated by the scatterplot and red line on the right hand side.
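
The second step, relating the two constructed variables by a mathematical function, can be sketched just as briefly. The scores below are hypothetical; the point is only that, once a functional law such as a linear relation has been fitted, predictions for individual cases follow by deduction:

import numpy as np

# Hypothetical scale values from two Guttman-scalable behavior sets (illustrative only).
engagement = np.array([3, 2, 2, 1, 0])
distraction = np.array([0, 1, 1, 2, 3])

# Variable-based model: relate the two variables by a (here linear) function.
slope, intercept = np.polyfit(engagement, distraction, deg=1)
print(f"distraction ≈ {slope:.2f} * engagement + {intercept:.2f}")

# Deduction: the fitted law yields a prediction for any individual case.
print("predicted distraction for engagement = 2:", slope * 2 + intercept)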

The Qualitative Strategy–Categories and Typologies

The predominant type of analysis in qualitative research consists in category formation. By constructing descriptive systems for empirical phenomena, it is possible to analyze the underlying empirical structure at a higher level of abstraction. The resulting categories (or types) constitute a conceptual frame for the interpretation of the observations. Qualitative researchers differ considerably in the way they collect and analyze data (Miles et al., 2014 ). However, despite the diverse research strategies followed by different qualitative methodologies, from a formal perspective, most approaches build on some kind of categorization of cases that share some common features. The process of category formation is essential in many qualitative methodologies, like, for example, qualitative content analysis, thematic analysis, grounded theory (see Flick, 2014 for an overview). Sometimes these features are directly observable (like in our classroom example), sometimes they are themselves the result of an interpretative process (e.g., Scheunpflug et al., 2016 ).

In contrast to quantitative methodologies, there have been few attempts to formalize qualitative research strategies (compare, however, Rihoux and Ragin, 2009 ). However, there are several statistical approaches to non-numerical data that deal with constructing abstract categories and establishing relations between these categories (Agresti, 2013 ). Some of these methods are very similar to qualitative category formation on a conceptual level. For example, cluster analysis groups cases into homogeneous categories (clusters) based on their similarity on a distance metric.

Although category formation can be formalized in a mathematically rigorous way (Ganter and Wille, 1999 ), qualitative research hardly acknowledges these approaches. 3 However, in order to find a common ground with quantitative science, it is certainly helpful to provide a formal interpretation of category systems.

Let us reconsider the above example of students in a classroom. The quantitative strategy was to assign numbers to the students with regard to variables and to relate these variables via a mathematical function. We can analyze the same empirical structure by grouping the behaviors to form abstract categories. If the aim is to construct an empirically valid category system, this grouping is subject to constraints, analogous to those used to specify a measurement model. The first and most important constraint is that the behaviors must form equivalence classes, i.e., within categories, behaviors need to be equivalent, and across categories, they need to be distinct (formally, the relational structure must obey the axioms of an equivalence relation). When objects are grouped into equivalence classes, it is essential to specify the criterion for empirical equivalence. In qualitative methodology, this is sometimes referred to as the tertium comparationis (Flick, 2014 ). One possible criterion is to group behaviors such that they constitute a set of specific common attributes of a group of people. In our example, we might group the behaviors “to listen,” “to take notes,” and “to doodle,” because these behaviors are common to the cases B, C, and D, and they are also specific for these cases, because no other person shows this particular combination of behaviors. The set of common behaviors then forms an abstract concept (e.g., “moderate distraction”), while the set of persons that show this configuration form a type (e.g., “the silent dreamer”). Formally, this means to identify the maximal rectangles in the underlying empirical relational structure (see Figure 3 ). This procedure is very similar to the way we constructed a Guttman scale, the only difference being that we now use different aspects of the empirical relational structure. 4 In fact, the set of maximal rectangles can be determined by an automated algorithm (Ganter, 2010 ), just like the dimensionality of an empirical structure can be explored by psychometric scaling methods. Consequently, we can identify the empirical content of a category system or a typology as the set of empirical structures that conforms to it. 5 Whereas the quantitative strategy was to search for scalable sub-matrices and then relate the constructed variables by a mathematical function, the qualitative strategy is to construct an empirical typology by grouping cases based on their specific similarities. These types can then be related to one another by a conceptual model that describes their semantic and empirical overlap (see Figure 3 , right hand side).

Figure 3. Constructing a conceptual model from an empirical relational structure. Individual behaviors are grouped to form abstract types based on their being shared among a specific subset of the cases. Each type constitutes a set of specific commonalities of a class of individuals (this is indicated by the rectangles on the left hand side). The resulting types (“active learner,” “silent dreamer,” “distracted listener,” and “troublemaker”) can then be related to one another to explicate their semantic and empirical overlap, as indicated by the Venn-diagram on the right hand side.
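
For readers who want to see the "maximal rectangles" idea in executable form, the following sketch applies the two derivation operators of formal concept analysis to a small invented incidence relation, loosely modeled on the classroom example (the names and behavior sets are illustrative, not the paper's data). Each resulting pair of a case set and its shared behaviors is a formal concept, i.e., a candidate type:

from itertools import combinations

# Hypothetical person x behavior incidence, loosely modeled on the classroom example.
incidence = {
    "A": {"listen", "take notes", "ask questions"},
    "B": {"listen", "take notes", "doodle"},
    "C": {"listen", "take notes", "doodle"},
    "D": {"listen", "take notes", "doodle", "chat"},
    "E": {"chat", "play"},
}
all_behaviors = set().union(*incidence.values())

def shared_behaviors(cases):
    """Behaviors shown by every case in the set (the concept's 'intent')."""
    return set.intersection(*(incidence[c] for c in cases)) if cases else set(all_behaviors)

def cases_showing(behaviors):
    """Cases that show every behavior in the set (the concept's 'extent')."""
    return {c for c, shown in incidence.items() if behaviors <= shown}

# A formal concept (maximal rectangle) is a pair (extent, intent) that closes on itself.
concepts = set()
for r in range(len(incidence) + 1):
    for cases in combinations(incidence, r):
        intent = shared_behaviors(set(cases))
        extent = cases_showing(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(extent), "<->", sorted(intent))
    # e.g. ['B', 'C', 'D'] <-> ['doodle', 'listen', 'take notes']  (a "silent dreamer"-like type)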

Variable-Based Models and Case-Based Models

In the previous section, we have argued that qualitative category formation and quantitative measurement can both be characterized as methods to construct abstract representations of empirical relational structures. Instead of focusing on different philosophical approaches to empirical science, we tried to stress the formal similarities between both approaches. However, it is worth also exploring the dissimilarities from a formal perspective.

Following the above analysis, the quantitative approach can be characterized by the use of variable-based models, whereas the qualitative approach is characterized by case-based models (Ragin, 1987 ). Formally, we can identify the rows of an empirical person × behavior matrix with a person-space, and the columns with a corresponding behavior-space. A variable-based model abstracts from the single individuals in a person-space to describe the structure of behaviors on a population level. A case-based model, on the contrary, abstracts from the single behaviors in a behavior-space to describe individual case configurations on the level of abstract categories (see Table 1 ).

Variable-based models and case-based models.

From a representational perspective, there is no a priori reason to favor one type of model over the other. Both approaches provide different analytical tools to construct an abstract representation of an empirical relational structure. However, since the two modeling approaches make use of different information (person-space vs. behavior-space), this comes with some important implications for the researcher employing one of the two strategies. These are concerned with the role of deductive and inductive reasoning.

In variable-based models, empirical structures are represented by functional relations between variables. These are usually stated as scientific laws (Carnap, 1928 ). Formally, these laws correspond to logical expressions of the form

∀i: yᵢ = f(xᵢ)

In plain text, this means that y is a function of x for all objects i in the relational structure under consideration. For example, in the above example, one may formulate the following law: for all students in the classroom it holds that “distraction” is a monotone decreasing function of “engagement.” Such a law can be used to derive predictions for single individuals by means of logical deduction: if the above law applies to all students in the classroom, it is possible to calculate the expected distraction from a student's engagement. An empirical observation can now be evaluated against this prediction. If the prediction turns out to be false, the law can be refuted based on the principle of falsification (Popper, 1935 ). If a scientific law repeatedly withstands such empirical tests, it may be considered to be valid with regard to the relational structure under consideration.

In case-based models, there are no laws about a population, because the model does not abstract from the cases but from the observed behaviors. A case-based model describes the underlying structure in terms of existential sentences. Formally, this corresponds to a logical expression of the form

∃i: XYZᵢ

In plain text, this means that there is at least one case i for which the condition XYZ holds. For example, the above category system implies that there is at least one active learner. This is a statement about a singular observation. It is impossible to deduce a statement about another person from an existential sentence like this. Therefore, the strategy of falsification cannot be applied to test the model's validity in a specific context. If one wishes to generalize to other cases, this is accomplished by inductive reasoning, instead. If we observed one person that fulfills the criteria of calling him or her an active learner, we can hypothesize that there may be other persons that are identical to the observed case in this respect. However, we do not arrive at this conclusion by logical deduction, but by induction.

Despite this important distinction, it would be wrong to conclude that variable-based models are intrinsically deductive and case-based models are intrinsically inductive. 6 Both types of reasoning apply to both types of models, but on different levels. Based on a person-space, in a variable-based model one can use deduction to derive statements about individual persons from abstract population laws. There is an analogous way of reasoning for case-based models: because they are based on a behavior space, it is possible to deduce statements about singular behaviors. For example, if we know that Peter is an active learner, we can deduce that he takes notes in the classroom. This kind of deductive reasoning can also be applied on a higher level of abstraction to deduce thematic categories from theoretical assumptions (Braun and Clarke, 2006 ). Similarly, there is an analog for inductive generalization from the perspective of variable-based modeling: since the laws are only quantified over the person-space, generalizations to other behaviors rely on inductive reasoning. For example, it is plausible to assume that highly engaged students tend to do their homework properly–however, in our example this behavior has never been observed. Hence, in variable-based models we usually generalize to other behaviors by means of induction. This kind of inductive reasoning is very common when empirical results are generalized from the laboratory to other behavioral domains.

Although inductive and deductive reasoning are used in qualitative and quantitative research, it is important to stress the different roles of induction and deduction when models are applied to cases. A variable-based approach implies to draw conclusions about cases by means of logical deduction; a case-based approach implies to draw conclusions about cases by means of inductive reasoning. In the following, we build on this distinction to differentiate between qualitative (bottom-up) and quantitative (top-down) strategies of generalization.

Generalization and the Problem of Replication

We will now extend the formal analysis of quantitative and qualitative approaches to the question of generalization and replicability of empirical findings. For this sake, we have to introduce some concepts of formal logic. Formal logic is concerned with the validity of arguments. It provides conditions to evaluate whether certain sentences (conclusions) can be derived from other sentences (premises). In this context, a theory is nothing but a set of sentences (also called axioms). Formal logic provides tools to derive new sentences that must be true, given the axioms are true (Smith, 2020 ). These derived sentences are called theorems or, in the context of empirical science, predictions or hypotheses . On the syntactic level, the rules of logic only state how to evaluate the truth of a sentence relative to its premises. Whether or not sentences are actually true, is formally specified by logical semantics.

On the semantic level, formal logic is intrinsically linked to set-theory. For example, a logical statement like “all dogs are mammals,” is true if and only if the set of dogs is a subset of the set of mammals. Similarly, the sentence “all chatting students doodle” is true if and only if the set of chatting students is a subset of the set of doodling students (compare Figure 3 ). Whereas, the first sentence is analytically true due to the way we define the words “dog” and “mammal,” the latter can be either true or false, depending on the relational structure we actually observe. We can thus interpret an empirical relational structure as the truth criterion of a scientific theory. From a logical point of view, this corresponds to the semantics of a theory. As shown above, variable-based and case-based models both give a formal representation of the same kinds of empirical structures. Accordingly, both types of models can be stated as formal theories. In the variable-based approach, this corresponds to a set of scientific laws that are quantified over the members of an abstract population (these are the axioms of the theory). In the case-based approach, this corresponds to a set of abstract existential statements about a specific class of individuals.

In contrast to mathematical axiom systems, empirical theories are usually not considered to be necessarily true. This means that even if we find no evidence against a theory, it is still possible that it is actually wrong. We may know that a theory is valid in some contexts, yet it may fail when applied to a new set of behaviors (e.g., if we use a different instrumentation to measure a variable) or a new population (e.g., if we draw a new sample).

From a logical perspective, the possibility that a theory may turn out to be false stems from the problem of contingency. A statement is contingent if it is both possibly true and possibly false. Formally, we introduce two modal operators: □ to designate logical necessity, and ◇ to designate logical possibility. Semantically, these operators are very similar to the existential quantifier, ∃, and the universal quantifier, ∀. Whereas ∃ and ∀ refer to the individual objects within one relational structure, the modal operators □ and ◇ range over so-called possible worlds: a statement is possibly true if and only if it is true in at least one accessible possible world, and a statement is necessarily true if and only if it is true in every accessible possible world (Hughes and Cresswell, 1996 ). Logically, possible worlds are mathematical abstractions, each consisting of a relational structure. Taken together, the relational structures of all accessible possible worlds constitute the formal semantics of necessity, possibility and contingency. 7

In the context of an empirical theory, each possible world may be identified with an empirical relational structure like the above classroom example. Given the set of intended applications of a theory (the scope of the theory, one may say), we can now construct possible world semantics for an empirical theory: each intended application of the theory corresponds to a possible world. For example, a quantified sentence like “all chatting students doodle” may be true in one classroom and false in another one. In terms of possible worlds, this would correspond to a statement of contingency: “it is possible that all chatting students doodle in one classroom, and it is possible that they don't in another classroom.” Note that in the above expression, “all students” refers to the students in only one possible world, whereas “it is possible” refers to the fact that there is at least one possible world for each of the specified cases.
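
A small, self-contained sketch may help to make these semantics concrete. Each "possible world" below is an invented classroom reduced to two sets of students; the sentence "all chatting students doodle" is evaluated within each world, and the modal operators ◇ and □ are then evaluated across worlds:

# Each "possible world" is one empirical relational structure (one classroom),
# here reduced to two sets of students; all data are invented for illustration.
worlds = {
    "classroom_1": {"chatting": {"D"}, "doodling": {"B", "C", "D"}},
    "classroom_2": {"chatting": {"F", "G"}, "doodling": {"F"}},
}

def all_chatting_students_doodle(world):
    """The quantified sentence 'for all students: chatting implies doodling', in one world."""
    return world["chatting"] <= world["doodling"]

# ◇φ: true in at least one accessible world;  □φ: true in every accessible world.
possibly = any(all_chatting_students_doodle(w) for w in worlds.values())
necessarily = all(all_chatting_students_doodle(w) for w in worlds.values())
print(possibly, necessarily)  # True False -> the sentence is contingent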

To apply these possible world semantics to quantitative research, let us reconsider how generalization to other cases works in variable-based models. Due to the syntactic structure of quantitative laws, we can deduce predictions for singular observations from an expression of the form ∀ i : y i = f ( x i ). Formally, the logical quantifier ∀ ranges only over the objects of the corresponding empirical relational structure (in our example this would refer to the students in the observed classroom). But what if we want to generalize beyond the empirical structure we actually observed? The standard procedure is to assume an infinitely large, abstract population from which a random sample is drawn. Given the truth of the theory, we can deduce predictions about what we may observe in the sample. Since usually we deal with probabilistic models, we can evaluate our theory by means of the conditional probability of the observations, given the theory holds. This concept of conditional probability is the foundation of statistical significance tests (Hogg et al., 2013 ), as well as Bayesian estimation (Watanabe, 2018 ). In terms of possible world semantics, the random sampling model implies that all possible worlds (i.e., all intended applications) can be conceived as empirical sub-structures from a greater population structure. For example, the empirical relational structure constituted by the observed behaviors in a classroom would be conceived as a sub-matrix of the population person × behavior matrix. It follows that, if a scientific law is true in the population, it will be true in all possible worlds, i.e., it will be necessarily true. Formally, this corresponds to an expression of the form

□(∀i: yᵢ = f(xᵢ))

The statistical generalization model thus constitutes a top-down strategy for dealing with individual contexts that is analogous to the way variable-based models are applied to individual cases (compare Table 1 ). Consequently, if we apply a variable-based model to a new context and find out that it does not fit the data (i.e., there is a statistically significant deviation from the model predictions), we have reason to doubt the validity of the theory. This is what makes the problem of low replicability so important: we observe that the predictions are wrong in a new study; and because we apply a top-down strategy of generalization to contexts beyond the ones we observed, we see our whole theory at stake.

Qualitative research, on the contrary, follows a different strategy of generalization. Since case-based models are formulated by a set of context-specific existential sentences, there is no need for universal truth or necessity. In contrast to statistical generalization to other cases by means of random sampling from an abstract population, the usual strategy in case-based modeling is to employ a bottom-up strategy of generalization that is analogous to the way case-based models are applied to individual cases. Formally, this may be expressed by stating that the observed qualia exist in at least one possible world, i.e., the theory is possibly true:

◇(∃i: XYZᵢ)

This statement is analogous to the way we apply case-based models to individual cases (compare Table 1 ). Consequently, the set of intended applications of the theory does not follow from a sampling model, but from theoretical assumptions about which cases may be similar to the observed cases with respect to certain relevant characteristics. For example, if we observe that certain behaviors occur together in one classroom, following a bottom-up strategy of generalization, we will hypothesize why this might be the case. If we do not replicate this finding in another context, this does not question the model itself, since it was a context-specific theory all along. Instead, we will revise our hypothetical assumptions about why the new context is apparently less similar to the first one than we originally thought. Therefore, if an empirical finding does not replicate, we are more concerned about our understanding of the cases than about the validity of our theory.

Whereas statistical generalization provides us with a formal (and thus somehow more objective) apparatus to evaluate the universal validity of our theories, the bottom-up strategy forces us to think about the class of intended applications on theoretical grounds. This means that we have to ask: what are the boundary conditions of our theory? In the above classroom example, following a bottom-up strategy, we would build on our preliminary understanding of the cases in one context (e.g., a public school) to search for similar and contrasting cases in other contexts (e.g., a private school). We would then re-evaluate our theoretical description of the data and explore what makes cases similar or dissimilar with regard to our theory. This enables us to expand the class of intended applications alongside with the theory.

Of course, none of these strategies is superior per se . Nevertheless, they rely on different assumptions and may thus be more or less adequate in different contexts. The statistical strategy relies on the assumption of a universal population and invariant measurements. This means, we assume that (a) all samples are drawn from the same population and (b) all variables refer to the same behavioral classes. If these assumptions are true, statistical generalization is valid and therefore provides a valuable tool for the testing of empirical theories. The bottom-up strategy of generalization relies on the idea that contexts may be classified as being more or less similar based on characteristics that are not part of the model being evaluated. If such a similarity relation across contexts is feasible, the bottom-up strategy is valid, as well. Depending on the strategy of generalization, replication of empirical research serves two very different purposes. Following the (top-down) principle of generalization by deduction from scientific laws, replications are empirical tests of the theory itself, and failed replications question the theory on a fundamental level. Following the (bottom-up) principle of generalization by induction to similar contexts, replications are a means to explore the boundary conditions of a theory. Consequently, failed replications question the scope of the theory and help to shape the set of intended applications.

We have argued that quantitative and qualitative research are best understood by means of the structure of the employed models. Quantitative science mainly relies on variable-based models and usually employs a top-down strategy of generalization from an abstract population to individual cases. Qualitative science prefers case-based models and usually employs a bottom-up strategy of generalization. We further showed that failed replications have very different implications depending on the underlying strategy of generalization. Whereas in the top-down strategy, replications are used to test the universal validity of a model, in the bottom-up strategy, replications are used to explore the scope of a model. We will now address the implications of this analysis for psychological research with regard to the problem of replicability.

Modern-day psychology almost exclusively follows a top-down strategy of generalization. Given the quantitative background of most psychological theories, this is hardly surprising. Following the general structure of variable-based models, the individual case is not the focus of the analysis. Instead, scientific laws are stated on the level of an abstract population. Therefore, when applying the theory to a new context, a statistical sampling model seems to be the natural consequence. However, this is not the only possible strategy. From a logical point of view, there is no reason to assume that a quantitative law like ∀i: yᵢ = f(xᵢ) implies that the law is necessarily true, i.e., □(∀i: yᵢ = f(xᵢ)). Instead, one might just as well define the scope of the theory following an inductive strategy. 8 Formally, this would correspond to the assumption that the observed law is possibly true, i.e., ◇(∀i: yᵢ = f(xᵢ)). For example, we may discover a functional relation between “engagement” and “distraction” without referring to an abstract universal population of students. Instead, we may hypothesize under which conditions this functional relation may be valid and use these assumptions to inductively generalize to other cases.

If we take this seriously, this would require us to specify the intended applications of the theory: in which contexts do we expect the theory to hold? Or, equivalently, what are the boundary conditions of the theory? These boundary conditions may be specified either intensionally, i.e., by giving external criteria for contexts being similar enough to the ones already studied to expect a successful application of the theory. Or they may be specified extensionally, by enumerating the contexts where the theory has already been shown to be valid. These boundary conditions need not be restricted to the population we refer to, but include all kinds of contextual factors. Therefore, adopting a bottom-up strategy, we are forced to think about these factors and make them an integral part of our theories.

In fact, there is good reason to believe that bottom-up generalization may be more adequate in many psychological studies. Apart from the pitfalls associated with statistical generalization that have been extensively discussed in recent years (e.g., p-hacking, underpowered studies, publication bias), it is worth reflecting on whether the underlying assumptions are met in a particular context. For example, many samples used in experimental psychology are not randomly drawn from a large population, but are convenience samples. If we use statistical models with non-random samples, we have to assume that the observations vary as if drawn from a random sample. This may indeed be the case for randomized experiments, because all variation between the experimental conditions apart from the independent variable will be random due to the randomization procedure. In this case, a classical significance test may be regarded as an approximation to a randomization test (Edgington and Onghena, 2007 ). However, if we interpret a significance test as an approximate randomization test, we test not for generalization but for internal validity. Hence, even if we use statistical significance tests when assumptions about random sampling are violated, we still have to use a different strategy of generalization. This issue has been discussed in the context of small-N studies, where variable-based models are applied to very small samples, sometimes consisting of only one individual (Dugard et al., 2012 ). The bottom-up strategy of generalization that is employed by qualitative researchers, provides such an alternative.
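
As an illustration of this point, the following sketch contrasts a classical two-sample t-test with a randomization (permutation) test on the same invented data from a randomized experiment run on a convenience sample, assuming NumPy and SciPy are available. The permutation test only re-randomizes group labels, so its p-value speaks to internal validity rather than to generalization to a population; the data and the number of permutations are arbitrary choices for illustration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented outcomes from a randomized experiment run on a convenience sample.
control = np.array([4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 3.9, 4.8])
treatment = np.array([5.1, 5.8, 4.9, 5.5, 6.0, 5.2, 4.7, 5.6])
observed_diff = treatment.mean() - control.mean()

# Classical two-sample t-test: assumes random sampling from an abstract population.
t_stat, p_parametric = stats.ttest_ind(treatment, control)

# Randomization test: re-shuffle group labels, justified by the randomization procedure
# itself; the resulting p-value speaks to internal validity, not to generalization.
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)
perm_diffs = []
for _ in range(10_000):
    shuffled = rng.permutation(pooled)
    perm_diffs.append(shuffled[:n_treat].mean() - shuffled[n_treat:].mean())
p_randomization = np.mean(np.abs(perm_diffs) >= abs(observed_diff))

print(f"t-test p = {p_parametric:.4f}, randomization-test p = {p_randomization:.4f}")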

Another important issue in this context is the question of measurement invariance. If we construct a variable-based model in one context, the variables refer to those behaviors that constitute the underlying empirical relational structure. For example, we may construct an abstract measure of “distraction” using the observed behaviors in a certain context. We will then use the term “distraction” as a theoretical term referring to the variable we have just constructed to represent the underlying empirical relational structure. Let us now imagine we apply this theory to a new context. Even if the individuals in our new context are part of the same population, we may still get into trouble if the observed behaviors differ from those used in the original study. How do we know whether these behaviors constitute the same variable? We have to ensure that in any new context, our measures are valid for the variables in our theory. Without a proper measurement model, this will be hard to achieve (Buntins et al., 2017 ). Again, we are faced with the necessity to think of the boundary conditions of our theories. In which contexts (i.e., for which sets of individuals and behaviors) do we expect our theory to work?

If we follow the rationale of inductive generalization, we can explore the boundary conditions of a theory with every new empirical study. We thus widen the scope of our theory by comparing successful applications in different contexts and unsuccessful applications in similar contexts. This may ultimately lead to a more general theory, maybe even one of universal scope. However, unless we have such a general theory, we might be better off, if we treat unsuccessful replications not as a sign of failure, but as a chance to learn.

Author Contributions

MB conceived the original idea and wrote the first draft of the paper. MS helped to further elaborate and scrutinize the arguments. All authors contributed to the final version of the manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to thank Annette Scheunpflug for helpful comments on an earlier version of the manuscript.

1 A person × behavior matrix constitutes a very simple relational structure that is common in psychological research. This is why it is chosen here as a minimal example. However, more complex structures are possible, e.g., by relating individuals to behaviors over time, with individuals nested within groups etc. For a systematic overview, compare Coombs ( 1964 ).

2 This notion of empirical content applies only to deterministic models. The empirical content of a probabilistic model consists in the probability distribution over all possible empirical structures.

3 For example, neither the SAGE Handbook of qualitative data analysis edited by Flick ( 2014 ) nor the Oxford Handbook of Qualitative Research edited by Leavy ( 2014 ) mention formal approaches to category formation.

4 Note also that the described structure is empirically richer than a nominal scale. Therefore, a reduction of qualitative category formation to be a special (and somehow trivial) kind of measurement is not adequate.

5 It is possible to extend this notion of empirical content to the probabilistic case (this would correspond to applying a latent class analysis). But, since qualitative research usually does not rely on formal algorithms (neither deterministic nor probabilistic), there is currently little practical use of such a concept.

6 We do not elaborate on abductive reasoning here, since, given an empirical relational structure, the concept can be applied to both types of models in the same way (Schurz, 2008 ). One could argue that the underlying relational structure is not given a priori but has to be constructed by the researcher and will itself be influenced by theoretical expectations. Therefore, abductive reasoning may be necessary to establish an empirical relational structure in the first place.

7 We shall not elaborate on the metaphysical meaning of possible worlds here, since we are only concerned with empirical theories [but see Tooley ( 1999 ), for an overview].

8 Of course, this also means that it would be equally reasonable to employ a top-down strategy of generalization using a case-based model by postulating that □(∃i: XYZᵢ). The implications for case-based models are certainly worth exploring, but lie beyond the scope of this article.

  • Agresti A. (2013). Categorical Data Analysis, 3rd Edn. Wiley Series in Probability and Statistics. Hoboken, NJ: Wiley.
  • Borsboom D. (2005). Measuring the Mind: Conceptual Issues in Contemporary Psychometrics. Cambridge: Cambridge University Press. 10.1017/CBO9780511490026
  • Braun V., Clarke V. (2006). Using thematic analysis in psychology. Qual. Res. Psychol. 3, 77–101. 10.1191/1478088706qp063oa
  • Buntins M., Buntins K., Eggert F. (2017). Clarifying the concept of validity: from measurement to everyday language. Theory Psychol. 27, 703–710. 10.1177/0959354317702256
  • Carnap R. (1928). The Logical Structure of the World. Berkeley, CA: University of California Press.
  • Coombs C. H. (1964). A Theory of Data. New York, NY: Wiley.
  • Creswell J. W. (2015). A Concise Introduction to Mixed Methods Research. Los Angeles, CA: Sage.
  • Dugard P., File P., Todman J. B. (2012). Single-Case and Small-N Experimental Designs: A Practical Guide to Randomization Tests, 2nd Edn. New York, NY: Routledge. 10.4324/9780203180938
  • Edgington E., Onghena P. (2007). Randomization Tests, 4th Edn. Hoboken, NJ: CRC Press. 10.1201/9781420011814
  • Everett J. A. C., Earp B. D. (2015). A tragedy of the (academic) commons: interpreting the replication crisis in psychology as a social dilemma for early-career researchers. Front. Psychol. 6:1152. 10.3389/fpsyg.2015.01152
  • Flick U. (Ed.). (2014). The Sage Handbook of Qualitative Data Analysis. London: Sage. 10.4135/9781446282243
  • Freeman M., Demarrais K., Preissle J., Roulston K., St. Pierre E. A. (2007). Standards of evidence in qualitative research: an incitement to discourse. Educ. Res. 36, 25–32. 10.3102/0013189X06298009
  • Ganter B. (2010). Two basic algorithms in concept analysis, in Lecture Notes in Computer Science. Formal Concept Analysis, Vol. 5986, eds Hutchison D., Kanade T., Kittler J., Kleinberg J. M., Mattern F., Mitchell J. C., et al. (Berlin; Heidelberg: Springer), 312–340. 10.1007/978-3-642-11928-6_22
  • Ganter B., Wille R. (1999). Formal Concept Analysis. Berlin; Heidelberg: Springer. 10.1007/978-3-642-59830-2
  • Guttman L. (1944). A basis for scaling qualitative data. Am. Sociol. Rev. 9:139. 10.2307/2086306
  • Hogg R. V., McKean J. W., Craig A. T. (2013). Introduction to Mathematical Statistics, 7th Edn. Boston, MA: Pearson.
  • Hughes G. E., Cresswell M. J. (1996). A New Introduction to Modal Logic. London; New York, NY: Routledge. 10.4324/9780203290644
  • Klein R. A., Ratliff K. A., Vianello M., Adams R. B., Bahník Š., Bernstein M. J., et al. (2014). Investigating variation in replicability. Soc. Psychol. 45, 142–152. 10.1027/1864-9335/a000178
  • Krantz D. H., Luce D., Suppes P., Tversky A. (1971). Foundations of Measurement Volume I: Additive and Polynomial Representations. New York, NY; London: Academic Press. 10.1016/B978-0-12-425401-5.50011-8
  • Leavy P. (2014). The Oxford Handbook of Qualitative Research. New York, NY: Oxford University Press. 10.1093/oxfordhb/9780199811755.001.0001
  • Maxwell S. E., Lau M. Y., Howard G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? Am. Psychol. 70, 487–498. 10.1037/a0039400
  • Miles M. B., Huberman A. M., Saldaña J. (2014). Qualitative Data Analysis: A Methods Sourcebook, 3rd Edn. Los Angeles, CA: Sage.
  • Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science 349:aac4716. 10.1126/science.aac4716
  • Popper K. (1935). Logik der Forschung. Wien: Springer. 10.1007/978-3-7091-4177-9
  • Ragin C. (1987). The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley, CA: University of California Press.
  • Rihoux B., Ragin C. (2009). Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques. Thousand Oaks, CA: Sage. 10.4135/9781452226569
  • Scheunpflug A., Krogull S., Franz J. (2016). Understanding learning in world society: qualitative reconstructive research in global learning and learning for sustainability. Int. J. Dev. Educ. Glob. Learn. 7, 6–23. 10.18546/IJDEGL.07.3.02
  • Schurz G. (2008). Patterns of abduction. Synthese 164, 201–234. 10.1007/s11229-007-9223-4
  • Shrout P. E., Rodgers J. L. (2018). Psychology, science, and knowledge construction: broadening perspectives from the replication crisis. Annu. Rev. Psychol. 69, 487–510. 10.1146/annurev-psych-122216-011845
  • Smith P. (2020). An Introduction to Formal Logic. Cambridge: Cambridge University Press. 10.1017/9781108328999
  • Suppes P., Krantz D. H., Luce D., Tversky A. (1971). Foundations of Measurement Volume II: Geometrical, Threshold, and Probabilistic Representations. New York, NY; London: Academic Press.
  • Tooley M. (Ed.). (1999). Necessity and Possibility. The Metaphysics of Modality. New York, NY; London: Garland Publishing.
  • Trafimow D. (2018). An a priori solution to the replication crisis. Philos. Psychol. 31, 1188–1214. 10.1080/09515089.2018.1490707
  • Watanabe S. (2018). Mathematical Foundations of Bayesian Statistics. CRC Monographs on Statistics and Applied Probability. Boca Raton, FL: Chapman and Hall.
  • Wiggins B. J., Chrisopherson C. D. (2019). The replication crisis in psychology: an overview for theoretical and philosophical psychology. J. Theor. Philos. Psychol. 39, 202–217. 10.1037/teo0000137
Open access | Published: 19 February 2024

Sustaining the collaborative chronic care model in outpatient mental health: a matrixed multiple case study

  • Bo Kim 1 , 2 ,
  • Jennifer L. Sullivan 3 , 4 ,
  • Madisen E. Brown 1 ,
  • Samantha L. Connolly 1 , 2 ,
  • Elizabeth G. Spitzer 1 , 5 ,
  • Hannah M. Bailey 1 ,
  • Lauren M. Sippel 6 , 7 ,
  • Kendra Weaver 8 &
  • Christopher J. Miller 1 , 2  

Implementation Science, volume 19, Article number: 16 (2024)


Sustaining evidence-based practices (EBPs) is crucial to ensuring care quality and addressing health disparities. Approaches to identifying factors related to sustainability are critically needed. One such approach is Matrixed Multiple Case Study (MMCS), which identifies factors and their combinations that influence implementation. We applied MMCS to identify factors related to the sustainability of the evidence-based Collaborative Chronic Care Model (CCM) at nine Department of Veterans Affairs (VA) outpatient mental health clinics, 3–4 years after implementation support had concluded.

We conducted a directed content analysis of 30 provider interviews, using 6 CCM elements and 4 Integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) domains as codes. Based on CCM code summaries, we designated each site as high/medium/low sustainability. We used i-PARIHS code summaries to identify relevant factors for each site, the extent of their presence, and the type of influence they had on sustainability (enabling/neutral/hindering/unclear). We organized these data into a sortable matrix and assessed sustainability-related cross-site trends.

CCM sustainability status was distributed among the sites, with three sites each being high, medium, and low. Twenty-five factors were identified from the i-PARIHS code summaries, of which 3 exhibited strong trends by sustainability status (relevant i-PARIHS domain in square brackets): “Collaborativeness/Teamwork [Recipients],” “Staff/Leadership turnover [Recipients],” and “Having a consistent/strong internal facilitator [Facilitation]” during and after active implementation. At most high-sustainability sites only, (i) “Having a knowledgeable/helpful external facilitator [Facilitation]” was variably present and enabled sustainability when present, while (ii) “Clarity about what CCM comprises [Innovation],” “Interdisciplinary coordination [Recipients],” and “Adequate clinic space for CCM team members [Context]” were somewhat or less present with mixed influences on sustainability.

Conclusions

MMCS revealed that CCM sustainability in VA outpatient mental health clinics may be related most strongly to provider collaboration, knowledge retention during staff/leadership transitions, and availability of skilled internal facilitators. These findings have informed a subsequent CCM implementation trial that prospectively examines whether enhancing the above-mentioned factors within implementation facilitation improves sustainability. MMCS is a systematic approach to multi-site examination that can be used to investigate sustainability-related factors applicable to other EBPs and across multiple contexts.


Contributions to the literature

We examined the ways in which the sustainability of the evidence-based Collaborative Chronic Care Model differed across nine outpatient mental health clinics where it was implemented.

This work demonstrates a unique application of the Matrixed Multiple Case Study (MMCS) method, originally developed to identify factors and their combinations that influence implementation, to investigate the long-term sustainability of a previously implemented evidence-based practice.

Contextual influences on sustainability identified through this work, as well as the systematic approach to multi-site examination offered by MMCS, can inform future efforts to sustainably implement and methodically evaluate an evidence-based practice’s uptake and continued use in routine care.

Background

The sustainability of evidence-based practices (EBPs) over time is crucial to maximize the public health impact of EBPs implemented into routine care. Implementation evaluators focus on sustainability as a central implementation outcome, and funders of implementation efforts seek sustained long-term returns on their investment. Furthermore, practitioners and leadership at implementation sites face the task of sustaining an EBP’s usage even after implementation funding, support, and associated evaluation efforts conclude. The circumstances and influences contributing to EBP sustainability are therefore of high interest to the field of implementation science.

Sustainability depends on the specific EBP being implemented, the individuals undergoing the implementation, the contexts in which the implementation takes place, and the facilitation of (i.e., support for) the implementation. Hence, universal conditions that invariably lead to sustainability are challenging to establish. Even if a set of conditions could be identified as being associated with high sustainability “on average,” its usefulness is questionable when most real-world implementation contexts may deviate from “average” on key implementation-relevant metrics.

Thus, a better understanding of EBP sustainability requires methods that examine the ways in which sustainability varies across diverse contexts. One such method is Matrixed Multiple Case Study (MMCS) [ 1 ], which is beginning to be applied in implementation research to identify factors related to implementation [ 2 , 3 , 4 , 5 ]. MMCS capitalizes on the many contextual variations and heterogeneous outcomes that are expected when an EBP is implemented across multiple sites. Specifically, MMCS provides a formalized sequence of steps for cross-site analysis by arranging data into an array of matrices, which are sorted and filtered to test for expected factors and identify less expected factors influencing an implementation outcome of interest.

Although the MMCS represents a promising method for systematically exploring the “black box” of the ways in which implementation is more or less successful, it has not yet been applied to investigate the long-term sustainability of implemented EBPs. Therefore, we applied MMCS to identify factors related to the sustainability of the evidence-based Collaborative Chronic Care Model (CCM), previously implemented using implementation facilitation [ 6 , 7 , 8 ], at nine VA medical centers’ outpatient general mental health clinics. An earlier interview-based investigation of CCM provider perspectives had identified key determinants of CCM sustainability at the sites, yet characteristics related to the ways in which CCM sustainability differed at the sites are still not well understood. For this reason, our objective was to apply MMCS to examine the interview data to determine factors associated with CCM sustainability at each site.

Methods

Clinical and implementation contexts

CCM-based care aims to ensure that patients are treated in a coordinated, patient-centered, and anticipatory manner. This project’s nine outpatient general mental health clinics had participated in a hybrid CCM effectiveness-implementation trial 3 to 4 years prior, which had resulted in improved clinical outcomes that were not universally maintained post-implementation (i.e., after implementation funding and associated evaluation efforts concluded) [ 7 , 9 ]. This lack of aggregate sustainability across the nine clinics is what prompted the earlier interview-based investigation of CCM provider perspectives that identified key determinants of CCM sustainability at the trial sites [ 10 ].

These prior works were conducted in VA outpatient mental health teams, known as Behavioral Health Interdisciplinary Program (BHIP) teams. While there was variability in the exact composition of each BHIP team, all teams consisted of a multidisciplinary set of frontline clinicians (e.g., psychiatrists, psychologists, social workers, nurses) and support staff, serving a panel of about 1000 patients each.

This current project applied MMCS to examine the data from the earlier interviews [ 10 ] for the ways in which CCM sustainability differed at the sites and the factors related to sustainability. The project was determined to be non-research by the VA Boston Research and Development Service, and therefore did not require oversight by the Institutional Review Board (IRB). Details regarding the procedures undertaken for the completed hybrid CCM effectiveness-implementation trial, which serves as the context for this project, have been previously published [ 6 , 7 ]. Similarly, details regarding data collection for the follow-up provider interviews have also been previously published [ 10 ]. We provide a brief overview of the steps that we took for data collection and describe the steps that we took for applying MMCS to analyze the interview data. Additional file  1 outlines our use of the Consolidated Criteria for Reporting Qualitative Research (COREQ) Checklist [ 11 ].

Data collection

We recruited 30 outpatient mental health providers across the nine sites that had participated in the CCM implementation trial, including a multidisciplinary mix of mental health leaders and frontline staff. We recruited participants via email, and we obtained verbal informed consent from all participants. Each interview lasted between 30 and 60 min and focused on the degree to which the participant perceived care processes to have remained aligned to the CCM’s six core elements: work role redesign, patient self-management support, provider decision support, clinical information systems, linkages to community resources, and organizational/leadership support [ 12 , 13 , 14 ]. Interview questions also inquired about the participant’s perceived barriers and enablers influencing CCM sustainability, as well as about the latest status of CCM-based care practices. Interviews were digitally recorded and professionally transcribed. Additional details regarding data collection have been previously published [ 10 ].

Data analysis

We applied MMCS’ nine analytical steps [ 1 ] to the interview data. Each step described below was led by one designated member of the project team, with subsequent review by all project team members to reach a consensus on the examination conducted for each step.

We established the evaluation goal (step 1) to identify the ways in which sustainability differed across the sites and the factors related to sustainability, defining sustainability (step 2) as the continued existence of CCM-aligned care practices—namely, that care processes remained aligned with the six core CCM elements. Table 1 shows examples of care processes that align with each CCM element. Our prior works directly leading up to this project (i.e., design and evaluation of the CCM implementation trial that involved the very sites included in this project [ 6 , 15 , 16 ]) were guided by the Integrated Promoting Action on Research Implementation in Health Services (i-PARIHS) framework [ 17 ], which positions facilitation (the implementation strategy that our trial was testing) as the core ingredient that drives implementation [ 17 ]. We therefore selected i-PARIHS’ four domains—innovation, recipients, context, and facilitation—as the domains under which to examine factors influencing sustainability (step 3). i-PARIHS posits that the successful implementation of an innovation and its sustained use by recipients in a context is enabled by facilitation (both the individuals doing the facilitation and the process used for facilitation). We examined the data on both sustainability and potentially relevant i-PARIHS domains (step 4) by conducting directed content analysis [ 18 ] of the recorded and professionally transcribed interview data, using the six CCM elements and the four i-PARIHS domains as a priori codes.
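
To make the coding setup concrete, the sketch below records the a priori code set named above as a simple Python structure and shows how a single coded excerpt might be represented. This is purely illustrative: the analysis was carried out by the project team in consensus meetings, not by software, and the site label in the example is hypothetical.

```python
# Illustrative sketch only: the a priori code set (6 CCM elements + 4 i-PARIHS
# domains) used for directed content analysis, plus one example coded excerpt.
CCM_ELEMENT_CODES = [
    "work role redesign",
    "patient self-management support",
    "provider decision support",
    "clinical information systems",
    "linkages to community resources",
    "organizational/leadership support",
]

IPARIHS_DOMAIN_CODES = ["innovation", "recipients", "context", "facilitation"]

A_PRIORI_CODES = CCM_ELEMENT_CODES + IPARIHS_DOMAIN_CODES

# Example of a coded excerpt (step 4 output feeding into steps 5 and 6);
# the site label is hypothetical, the quote appears later in the text.
coded_excerpt = {
    "site": "Site A",
    "participant": "604",
    "code": "recipients",
    "text": "Just a collaborative spirit.",
}
assert coded_excerpt["code"] in A_PRIORI_CODES
```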

Additional file 2 provides an overview of data input, tasks performed, and analysis output for MMCS steps 5 through 9 described below. We assessed sustainability per site (step 5) by generating CCM code summaries per site, and reached a consensus on whether each site exhibited high, medium, or low sustainability relative to other sites based on the summary data. We assigned a higher sustainability level to sites that exhibited more CCM-aligned care processes, had more participants consistently mention those processes, and considered those processes more as “just the way things are done” at the site. Namely, (i) high sustainability sites had concrete examples of CCM-aligned care processes (such as the ones shown in Table 1) for many of the six CCM elements, which multiple participants mentioned as central to how they deliver care; (ii) low sustainability sites had only a few concrete examples of CCM-aligned care processes, mentioned by only a small subset of participants and/or inconsistently practiced; and (iii) medium sustainability sites matched neither the high nor the low pattern, having several concrete examples of CCM-aligned care processes for some of the CCM elements, varying in whether they were mentioned by multiple participants and in how consistently they were part of delivering care. For the CCM code summaries per site, one project team member initially reviewed the coded data to draft the summaries including exemplar quotes. Each summary and relevant exemplar quotes were then reviewed and refined with input from all six project team members during recurring team meetings to finalize the high, medium, or low sustainability designation to use in the subsequent MMCS steps. Reviewing and refining the summaries for the nine sites took approximately four 60-min meetings of the six project team members, with each site’s CCM code summary taking approximately 20–35 min to discuss and reach consensus on. We referred to lists of specific examples of how the six core CCM elements were operationalized in our CCM implementation trial [ 19 , 20 ]. Refinements occurred mostly around familiarizing the newer members of the project team (i.e., those who had not participated in our prior CCM-related work) with the examples and definitions. We aligned to established qualitative analysis methods for consensus-reaching discussions [ 18 , 21 ]. Recognizing the common challenge faced by such discussions in adequately accounting for everyone’s interpretations of the data [ 22 ], we drew on Bens’ meeting facilitation techniques [ 23 ], which include setting ground rules, ensuring balanced participation from all project team members, and accurately recording decisions and action items.
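
For readers who find it easier to see the decision rule spelled out, the following minimal sketch encodes the high/medium/low logic described above as a function. The numeric cutoffs (e.g., treating four or more CCM elements with concrete examples as “many”) are our own illustrative assumptions; the project’s actual designations were reached by team consensus on narrative code summaries rather than by any scoring formula.

```python
# Minimal sketch of the step 5 designation logic; thresholds are assumptions,
# since the actual high/medium/low calls were made by team consensus rather
# than by a formula.
from dataclasses import dataclass

@dataclass
class SiteCcmSummary:
    elements_with_concrete_examples: int   # of the six CCM elements
    mentioned_by_multiple_participants: bool
    consistently_part_of_care: bool        # "just the way things are done"

def designate_sustainability(s: SiteCcmSummary) -> str:
    if (s.elements_with_concrete_examples >= 4          # "many" elements (assumed cutoff)
            and s.mentioned_by_multiple_participants
            and s.consistently_part_of_care):
        return "high"
    if (s.elements_with_concrete_examples <= 2          # "only a few" (assumed cutoff)
            and not (s.mentioned_by_multiple_participants and s.consistently_part_of_care)):
        return "low"
    return "medium"  # neither the high nor the low pattern

print(designate_sustainability(SiteCcmSummary(5, True, True)))    # -> high
print(designate_sustainability(SiteCcmSummary(3, True, False)))   # -> medium
print(designate_sustainability(SiteCcmSummary(1, False, False)))  # -> low
```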

We then identified influencing factors per site (step 6), by generating i-PARIHS code summaries per site and identifying distinct factors under each domain of i-PARIHS (e.g., Collaborativeness and teamwork as a factor under the Recipients domain). For the i-PARIHS code summaries per site, one project team member initially reviewed the coded data to draft the summaries including exemplar quotes. They elaborated on each i-PARIHS domain-specific summary by noting distinct factors that they deemed relevant to the summary, proposing descriptive wording to refer to each factor (e.g., “team members share a commitment to their patients” under the Recipients domain). Each summary, associated factor descriptions, and relevant exemplar quotes were then reviewed and refined with input from all six project team members during recurring team meetings to finalize the relevant factors to use in the subsequent MMCS steps. Finalizing the factors included deciding which similar proposed factor descriptions from different sites to consolidate into one factor and which wording to use to refer to the consolidated factor (e.g., “team members share a commitment to their patients,” “team members collaborate well,” and “team members know each other’s styles and what to expect” were consolidated into the Collaborativeness and teamwork factor under the Recipients domain). It took approximately four 60-min meetings of the six project team members to review and refine the summaries and factors for the nine sites, with each site’s i-PARIHS code summary and factors taking approximately 20–35 min to discuss and reach consensus on. We referred to lists of explicit definitions of i-PARIHS constructs that our team members had previously developed and published [ 16 , 24 ]. We once again aligned to established qualitative analysis methods for consensus-reaching discussions [ 18 , 21 ], drawing on Bens’ meeting facilitation techniques [ 23 ] to adequately account for everyone’s interpretations of the data [ 22 ].
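
The consolidation step can be pictured as a simple lookup from site-specific factor descriptions to consolidated factor names under their i-PARIHS domains, as in the sketch below. The example descriptions are the ones quoted above; the mapping itself was produced by team consensus, and the code only makes that mapping explicit.

```python
# Illustrative sketch of step 6 factor consolidation: similar site-specific
# descriptions map to one consolidated factor under its i-PARIHS domain.
CONSOLIDATION_MAP = {
    "team members share a commitment to their patients":
        ("Recipients", "Collaborativeness and teamwork"),
    "team members collaborate well":
        ("Recipients", "Collaborativeness and teamwork"),
    "team members know each other's styles and what to expect":
        ("Recipients", "Collaborativeness and teamwork"),
}

def consolidate(description):
    """Return the (i-PARIHS domain, consolidated factor) for a proposed description."""
    return CONSOLIDATION_MAP[description]

print(consolidate("team members collaborate well"))
# -> ('Recipients', 'Collaborativeness and teamwork')
```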

We organized the examined data (i.e., the assessed sustainability and identified factors per site) into a sortable matrix (step 7) using Microsoft Excel [ 25 ], laid out by influencing factor (row), sustainability (column), and site (sheet). We conducted within-site analysis of the matrixed data (step 8), examining the data on each influencing factor and designating (i) whether the factor was present, somewhat present, or minimally present, based on aggregate reports from the site’s participants (we used “minimally present” when, considering all available data from a site regarding a factor, the factor was predominantly weak, e.g., Ability to continue patient care during COVID at a medium sustainability site; we used “somewhat present” when the factor was neither predominantly strong nor predominantly weak, e.g., Collaborativeness and teamwork at a low sustainability site), and (ii) whether the factor had an enabling, hindering, or neutral/unclear influence on sustainability (designated as “neutral” when, considering all available data from a site regarding a factor, the factor had neither a predominantly enabling nor a predominantly hindering influence). These designations of factors’ presence and influence are conceptually representative of what is commonly referred to as magnitude and valence, respectively, by other efforts that construct scoring for qualitative data (e.g., [ 26 , 27 ]). As with the team-based consensus approach of earlier MMCS steps, factors’ presence and type of influence per site were initially proposed by one project team member after reviewing the matrix’s site-specific data, then refined with input from all project team members during recurring team meetings that reviewed the matrix. Accordingly, similar to the earlier MMCS steps, we aligned to established qualitative methods [ 18 , 21 ] and meeting facilitation techniques [ 23 ] for these consensus-reaching discussions.
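
A minimal sketch of the step 7 matrix layout and step 8 designations is shown below, assuming pandas with an Excel writer (e.g., openpyxl) is available; the sites, factors, and designations are hypothetical placeholders rather than the project’s data.

```python
# Minimal sketch of the step 7 matrix: influencing factors as rows; presence,
# influence, and the site's sustainability designation as columns; one Excel
# sheet per site. All sites, factors, and values below are hypothetical.
import pandas as pd

PRESENCE_LEVELS = ["present", "somewhat present", "minimally present"]
INFLUENCE_TYPES = ["enabling", "hindering", "neutral", "unclear"]

site_matrices = {
    "Site A": ("high", pd.DataFrame([
        {"factor": "Collaborativeness and teamwork", "domain": "Recipients",
         "presence": "present", "influence": "enabling"},
        {"factor": "Consistent and strong internal facilitator", "domain": "Facilitation",
         "presence": "present", "influence": "enabling"},
    ])),
    "Site B": ("low", pd.DataFrame([
        {"factor": "Collaborativeness and teamwork", "domain": "Recipients",
         "presence": "somewhat present", "influence": "neutral"},
        {"factor": "Turnover of clinic staff and leadership", "domain": "Recipients",
         "presence": "present", "influence": "hindering"},
    ])),
}

# Check that designations use the controlled vocabularies above.
for _, (_, frame) in site_matrices.items():
    assert frame["presence"].isin(PRESENCE_LEVELS).all()
    assert frame["influence"].isin(INFLUENCE_TYPES).all()

# Write one sheet per site (Excel caps sheet names at 31 characters).
with pd.ExcelWriter("mmcs_matrix.xlsx") as writer:
    for site, (sustainability, frame) in site_matrices.items():
        frame.assign(sustainability=sustainability).to_excel(
            writer, sheet_name=site[:31], index=False
        )
```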

We then conducted a cross-site analysis of the matrixed data (step 9), assessing whether factors and their combinations were (i) present across multiple sites, (ii) consistently associated with higher or lower sustainability, and (iii) emphasized at some sites more than others. We noted that any factor may not have come up during interviews at a site either because it was not pertinent or because it was pertinent but simply did not come up, although we asked an open-ended question at the end of each interview about whether there was anything else the participant wanted to share regarding sustainability. To account for these possibilities, we decided as a team to regard a factor or a combination of factors as being associated with high/medium/low sustainability if it was identified at a majority (i.e., even if not all) of the sites designated as high/medium/low sustainability (e.g., if the Collaborativeness and teamwork factor was identified at a majority, even if not all, of the high sustainability sites, we would regard it as associated with high sustainability). As with the team-based consensus approach of earlier MMCS steps, cross-site patterns were initially proposed by one project team member after reviewing the matrix’s cross-site data, then refined with input from all project team members during recurring team meetings that reviewed the matrix. Accordingly, similar to the earlier MMCS steps, we aligned to established qualitative methods [ 18 , 21 ] and meeting facilitation techniques [ 23 ] for these consensus-reaching discussions. We acknowledged the potential existence of additional factors influencing sustainability that may not have emerged during our interviews and that may vary substantially between sites. For example, adaptation of the CCM, characteristics of the patient population, and availability of continued funding, which are factors that the extant literature reports as being relevant to sustainability [ 28 , 29 ], were not seen in our interview data. To maintain our analytic focus on the factors seen in our data, we did not add these factors to our analysis.
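
The step 9 majority rule can be expressed compactly in code, as in the sketch below; the per-site factor lists and sustainability labels are hypothetical, and the rule simply flags a factor as associated with a sustainability level when it was identified at more than half of the sites carrying that level.

```python
# Minimal sketch of the step 9 cross-site rule: a factor is associated with a
# sustainability level if it was identified at a majority of the sites that
# carry that level. Site labels and factor sets below are hypothetical.
from collections import defaultdict

site_level = {"A": "high", "B": "high", "C": "high",
              "D": "medium", "E": "medium", "F": "medium",
              "G": "low", "H": "low", "I": "low"}

site_factors = {
    "A": {"Collaborativeness and teamwork", "Consistent and strong internal facilitator"},
    "B": {"Collaborativeness and teamwork", "Turnover of clinic staff and leadership"},
    "C": {"Collaborativeness and teamwork", "Consistent and strong internal facilitator"},
    "D": {"Collaborativeness and teamwork"},
    "E": {"Collaborativeness and teamwork", "Turnover of clinic staff and leadership"},
    "F": set(),
    "G": {"Turnover of clinic staff and leadership"},
    "H": {"Turnover of clinic staff and leadership"},
    "I": set(),
}

def factors_associated_with(level):
    """Return factors identified at more than half of the sites with this level."""
    sites = [s for s, lvl in site_level.items() if lvl == level]
    counts = defaultdict(int)
    for s in sites:
        for factor in site_factors[s]:
            counts[factor] += 1
    return {factor for factor, n in counts.items() if n > len(sites) / 2}

for lvl in ("high", "medium", "low"):
    print(lvl, factors_associated_with(lvl))
```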

Results

For the nine sites included in this project, we found the degree of CCM sustainability to be split evenly across the sites—three high-, three medium-, and three low-sustainability. Twenty-five total influencing factors were identified under the i-PARIHS domains of Innovation (6), Recipients (6), Context (8), and Facilitation (5). Table 2 shows these identified influencing factors by domain. Figure 1 shows 11 influencing factors that were identified for at least two sites within a group of high/medium/low sustainability sites—e.g., the factor “consistent and strong internal facilitator” is shown as being present at high sustainability sites with an enabling influence on sustainability, because it was identified as such at two or more of the high sustainability sites. Of these 11 influencing factors, four were identified only for sites with high CCM sustainability and two were identified only for sites with medium or low CCM sustainability.

Fig. 1 Influencing factors that were identified for at least two sites within a group of high/medium/low sustainability sites

Key trends in influencing factors associated with high, medium, and/or low CCM sustainability

Three factors across two i-PARIHS domains exhibited strong trends by sustainability status. They were the Collaborativeness and teamwork and Turnover of clinic staff and leadership factors under the Recipients domain, and the Having a consistent and strong internal facilitator factor under the Facilitation domain.

Recipients-related factors

Collaborativeness and teamwork was present with an enabling influence on CCM sustainability at most high and medium sustainability sites, while it was only somewhat present with a neutral influence on CCM sustainability at most low sustainability sites. When asked what had made their BHIP team work well, a participant from a high sustainability site said,

“Just a collaborative spirit.” (Participant 604)

A participant from a medium sustainability site said,

“We joke that [the BHIP teams] are even family, that the teams really do function pretty tightly and they each have their own personality.” (Participant 201)

At the low sustainability sites, willingness to work as a team varied across team members; a participant from a low sustainability site said,

“… I think it has to be the commitment of the people who are on the team. So those that are regularly attending, we get a lot more out of it than those that probably don't ever come [to team meetings].” (Participant 904)

Collaborativeness and teamwork of BHIP team members were often perceived as the highlight of pursuing interdisciplinary care.

Turnover of clinic staff and leadership was present with a hindering influence on CCM sustainability at most high, medium, and low sustainability sites.

“We’ve lost a lot of really, really good providers here in the time I’ve been here …,” (Participant 102)

said a participant from a low-sustainability site that had to reconfigure its BHIP teams due to clinic staff shortages. Turnover of mental health clinic leadership made it difficult to maintain CCM practices, especially beyond the teams that participated in the original CCM implementation trial. A participant from a medium sustainability site said,

“Probably about 90 percent of the things that we came up with have fallen by the wayside. Within our team, many of those remain but again, that hand off towards the other teams that I think partly is due to the turnover rate with program managers, supervisors, didn’t get fully implemented.” (Participant 703)

Although turnover was an issue for high sustainability sites as well, there was also indication of the situation improving in recent years; a participant from a high sustainability site said,

“… our attrition rollover rate has dropped quite a bit and I would really attribute that to [the CCM being] more functional and more sustainable and tolerable for the providers.” (Participant 502)

As such, staff and leadership turnover was deemed a major challenge for CCM sustainability for all sites regardless of the overall level of sustainability.

Facilitation-related factor

Having a consistent and strong internal facilitator was present with an enabling influence on CCM sustainability at high sustainability sites, not identified as an influencing factor at most of the medium sustainability sites, and variably present with a hindering, neutral, or unclear influence on CCM sustainability at low sustainability sites. Participants from a high sustainability site perceived that it was important for the internal facilitator to understand different BHIP team members’ personalities and know the clinic’s history. A participant from another high sustainability site shared that, as an internal facilitator themselves, they focused on recognizing and reinforcing the progress of team members:

“… I'm often the person who kind of [starts] off with, ‘Hey, look at what we've done in this location,’ ‘Hey look at what the team's done this month.’” (Participant 402)

A participant from a low sustainability site had also served as an internal facilitator and recounted the difficulty and importance of readying the BHIP team to function in the long run without their assistance:

“I should have been able to get out sooner, I think, to get it to have them running this themselves. And that was just a really difficult process.” (Participant 301)

Participants, especially from the high and low sustainability sites, attributed their BHIP teams’ successes and challenges to the skills of the internal facilitator.

Influencing factors identified only for sites with high CCM sustainability

Four factors across four i-PARIHS domains were identified for high sustainability sites and not for medium or low sustainability sites. They were the factors Details about the CCM being well understood (Innovation domain), Interdisciplinary coordination (Recipients domain), Having adequate clinic space for CCM team members (Context domain), and Having a knowledgeable and helpful external facilitator (Facilitation domain).

Innovation-related factor

Details about the CCM being well understood was minimally to somewhat present with an unclear influence on CCM sustainability.

“We’ve … been trying to help our providers see the benefit of team-based care and the episodes-of-care idea, and I would say that is something our folks really have continued to struggle with as well,” (Participant 401)

said a participant from a high sustainability site. “What is considered CCM-based care?” continued to be a question on providers’ minds. A participant from a high sustainability site asked during the interview,

“Is there kind of a clearing house of some of the best practices for [CCM] that you guys have … or some other collection of resources that we could draw from?” (Participant 601)

Although such references are indeed accessible online organization-wide, participants were not always aware of those resources or what exactly CCM entails.

Recipients-related factor

Interdisciplinary coordination was somewhat present with a hindering, neutral, or unclear influence on CCM sustainability. Coordination between psychotherapy and psychiatry providers was deemed difficult by participants from high-sustainability sites. A participant said,

“We were initially kind of top heavy on the psychiatry so just making sure we have … therapy staff balancing that out [has been important].” (Participant 501)

Another participant perceived that BHIP teams were helpful in managing

“… ‘sibling rivalry’ between different disciplines … because [CCM] puts us all in one team and we communicate.” (Participant 505)

Interdisciplinary coordination was understood by the participants as being necessary for effective CCM-based care yet difficult to achieve.

Context-related factor

Having adequate clinic space for CCM team members was minimally to somewhat present with a hindering, neutral, or unclear influence on CCM sustainability. COVID-19 led to changes in how clinic space was used and assigned. A participant from a high sustainability site remarked,

“Pre-COVID everything was in a room instead of online. And now all our meetings are online and so it's actually really easy for the supervisors to be able to rotate through them and then, you know, they can answer programmatic questions ….” (Participant 402)

Participants from another high sustainability site found that issues regarding limited clinic space were both exacerbated and alleviated by COVID, with the mental health service losing space to vaccine clinics but more mental health clinicians teleworking and in less need of clinic space. Virtual connections were seen to alleviate some physical workspace-related concerns.

Facilitation-related factor

Having a knowledgeable and helpful external facilitator was variably present; when present, it had an enabling influence on CCM sustainability. Participants from a high sustainability site noted that many of the external facilitator’s efforts to change the BHIP team’s work processes very much remained over time. An example was structuring team meetings to meet evolving patient needs. Team members came to meetings with the shared knowledge and expectation that,

“… we need to touch on folks who are coming out of the hospital, we need to touch on folks with higher acuity needs.” (Participant 402)

Implementation support that sites received from their external facilitator mostly occurred during the time period of the original CCM implementation trial; correspondence with the external facilitator after that trial time period was not common for sites. Participants still largely found the external facilitator to provide helpful guidance and advice on delivering CCM-based care.

Influencing factors identified only for sites with medium or low CCM sustainability

Two factors were identified for medium or low sustainability sites and not for high sustainability sites. They were the factors Ability to continue patient care during COVID and Adequate resources/capacity for care delivery . These factors were both under i-PARIHS’ Context domain, unlike the influencing factors above that were identified only for high sustainability sites, which spanned all four i-PARIHS domains.

Context-related factors

Ability to continue patient care during COVID had a hindering influence on CCM sustainability when minimally present. Participants felt that their CCM work was challenged when delivering care through telehealth was made difficult—e.g., at a medium sustainability site, site policies during the pandemic required a higher number of in-person services than the BHIP team providers expected or desired to deliver. On the other hand, this factor had an enabling influence on CCM sustainability when present. A participant at a low sustainability site mentioned the effect of telehealth on being able to follow up more easily with patients who did not show up for their appointments:

“… my no-show rate has dropped dramatically because if people don’t log on after a couple minutes, I call them. They're like ‘oh, I forgot, let me pop right on,’ whereas, you know, in the face-to-face space, you know, you wait 15 minutes, you call them, it’s too late for them to come in so then they're no shows.” (Participant 102)

The advantages of virtual care delivery, as well as the challenges of getting approvals to pursue it to varying extents, were well recognized by the participants.

Adequate resources/capacity for care delivery was minimally present at medium sustainability sites with a hindering influence on CCM sustainability. At a medium sustainability site, although leadership was supportive of CCM, resources were being used to keep clinics operational (especially during COVID) rather than investing in building new CCM-based care delivery processes.

“I think that if my boss came to me, [and asked] what could I do for [the clinics] … I would say even more staff,” (Participant 202)

said a participant from a medium sustainability site. At the same time, the participant, like many others we interviewed, understood and emphasized the need for BHIP teams to proceed with care delivery even when resources were limited:

“… when you’re already dealing with a very busy clinic, short staff and then you’re hit with a pandemic you handle it the best that you can.” (Participant 202)

Participants felt the need for basic resource requirements to be met in order for CCM-based care to be feasible.

Discussion

In this project, we examined factors influencing the sustainability of CCM-aligned care practices at general mental health clinics within nine VA medical centers that previously participated in a CCM implementation trial. Guided by the core CCM elements and i-PARIHS domains, we conducted and analyzed CCM provider interviews. Using MMCS, we found CCM sustainability to be split evenly across the nine sites (three high, three medium, and three low), and that sustainability may be related most strongly to provider collaboration, knowledge retention during staff/leadership transitions, and availability of skilled internal facilitators.

In comparison to most high sustainability sites, participants from most medium or low sustainability sites did not mention a knowledgeable and helpful external facilitator who enabled sustainability. Participants at the high sustainability sites also emphasized the need for clarity about what CCM-based care comprises, interdisciplinary coordination in delivering CCM-aligned care, and adequate clinic space for BHIP team members to connect and collaborate. In contrast, in comparison to participants at most high sustainability sites, participants at most medium or low sustainability sites emphasized the need for better continuity of patient-facing activities during the COVID-19 pandemic and more resources/capacity for care delivery. A notable difference between these two groups of influencing factors is that the ones emphasized at most high sustainability sites are more CCM-specific (e.g., external facilitator with CCM expertise, knowledge, and structures to support delivery of CCM-aligned care), while the ones emphasized at most medium or low sustainability sites are factors that certainly relate to CCM sustainability but are focused on care delivery operations beyond CCM-aligned care (e.g., COVID’s widespread impacts, limited staff availability). In short, an emphasis on immediate, short-term clinical needs in the face of the COVID-19 pandemic and staffing challenges appeared to sap sites’ enthusiasm for sustaining more collaborative, CCM-consistent care processes.

Our previous qualitative analysis of these interview data suggested that in order to achieve sustainability, it is important to establish appropriate infrastructure, organizational readiness, and mental health service- or department-wide coordination for CCM implementation [ 10 ]. The findings from the current project augment these previous findings by highlighting the specific factors associated with higher and lower CCM sustainability across the project sites. This additional knowledge provides two important insights into what CCM implementation efforts should prioritize with regard to the previously recommended appropriate infrastructure, readiness, and coordination. First, for knowledge retention and coordination during personnel changes (including any changes in internal facilitators through and following implementation), care processes and their specific procedures should be established and documented in order to bring new personnel up to speed on those care processes. Management sciences, as applied to health care and other fields, suggest that such organizational knowledge retention can be maximized when there are (i) structures set up to formally recognize/praise staff when they share key knowledge, (ii) succession plans to be applied in the event of staff turnover, (iii) opportunities for mentoring and shadowing, and (iv) after action reviews of conducted care processes, which allow staff to learn about and shape the processes themselves [ 30 , 31 , 32 , 33 ]. Future CCM implementation efforts may thus benefit from enacting these suggestions alongside establishing and documenting CCM-based care processes and associated procedures.

Second, efforts to implement CCM-aligned practices into routine care should account for the extent to which sites’ more fundamental operational needs are met or being addressed. That information can be used to appropriately scope the plan, expectations, and timeline for implementation. For instance, ongoing critical staffing shortages or high turnover [ 34 ] at a site are unlikely to be resolved through a few months of CCM implementation. In fact, in that situation, it is possible that CCM implementation efforts could lead to reduced team effectiveness in the short term, given the effort required to establish more collaborative and coordinated care processes [ 35 ]. Should CCM implementation move forward at a given site, implementation goals ought to be set on making progress in realms that are within the implementation effort’s control (e.g., designing CCM-aligned practices that take staffing challenges into consideration) [ 36 , 37 ] rather than on factors outside of the effort’s control (e.g., staffing shortages). As healthcare systems determine how to deploy support (e.g., facilitators) to sites for CCM implementation, they would benefit from considering whether it is primarily CCM expertise that the site needs at the moment, or more foundational organizational resources (e.g., mental health staffing, clinical space, leadership enhancement) [ 38 ] to first reach an operational state that can most benefit from CCM implementation efforts at a later point in time. There is growing consensus across the field that the readiness of a healthcare organization to innovate is a prerequisite to successful innovation (e.g., CCM implementation) regardless of the specific innovation [ 39 , 40 ]. Several promising strategies specifically target these organizational considerations for implementing evidence-based practices (e.g., [ 41 , 42 ]). Further, recent works have begun to more clearly delineate leadership-related, climate-related, and other contextual factors that contribute to organizations’ innovation readiness [ 43 ], which can inform healthcare systems’ future decisions regarding preparatory work leading to, and timing of, CCM implementation at their sites.

These considerations informed by MMCS may have useful implications for implementation strategy selection and tailoring for future CCM implementation efforts, especially in delineating the target level (e.g., system, organizational, clinic, individual) and timeline of implementation strategies to be deployed. For instance, of the three factors found to most notably trend with CCM sustainability, Collaborativeness and teamwork may be strengthened through shorter-term team-building interventions at the organizational and/or clinic levels [ 38 ], Turnover of clinic staff and leadership may be mitigated by aiming for longer-term culture/climate change at the system and/or organizational levels [ 44 , 45 , 46 ], and Having a consistent and strong internal facilitator may be ensured more immediately by selecting an individual with fitting expertise/characteristics to serve in the role [ 15 ] and imparting innovation/facilitation knowledge to them [ 47 ]. Which of these factors to focus on, and through what specific strategies, can be decided in partnership with an implementation site—for instance, candidate strategies can be identified based on ones that literature points to for addressing these factors [ 48 ], systematic selection of the strategies to move forward can happen with close input from site personnel [ 49 ], and explicit further specification of those strategies [ 50 ] can also happen in collaboration with site personnel to amply account for site-specific contexts [ 51 ].

As is common for implementation projects, the findings of this project are highly context-dependent. It involves the implementation of a specific evidence-based practice (the CCM) using a specific implementation strategy (implementation facilitation) at specific sites (BHIP teams within general mental health clinics at nine VA medical centers). For such context-dependent findings to be transferable [ 52 , 53 ] to meaningfully inform future implementation efforts, sources of variation in the findings and how the findings were reached must be documented and traceable. This means being explicit about each step and decision that led up to cross-site analysis, as MMCS encourages, so that future implementation efforts can accurately view and consider why and how findings might be transferable to their own work. For instance, beyond the finding that Turnover of clinic staff and leadership was a factor present at most of the examined sites, MMCS’ traceable documentation of qualitative data associated with this factor at high sustainability sites also allowed highlighting the perception that CCM implementation is contributing to mitigating turnover of providers in the clinic over time, which may be a crucial piece of information that fuels future CCM implementation efforts.

Furthermore, to compare findings and interpretations across projects, consistent procedures for setting up and conducting these multi-site investigations are indispensable [ 54 , 55 , 56 ]. Although many projects involve multiple sites and assess variations across the sites, it is less common to have clearly delineated protocols for conducting such assessments. MMCS is meant to target this very gap, by offering a formalized sequence of steps that prompt specification of analytical procedures and decisions that are often interpretive and left less specified. MMCS uses a concrete data structure (the matrix) to traceably organize information and knowledge gained from a project, and the matrix can accommodate various data sources and conceptual groundings (e.g., guiding theories, models, and frameworks) that may differ from project to project – for instance, although our application of MMCS aligned to i-PARIHS, other projects applying MMCS [ 2 , 5 ] use different conceptual guides (e.g., Consolidated Framework for Implementation Research [ 57 ], Theoretical Domains Framework [ 58 ]). Therefore, as more projects align to the MMCS steps [ 1 ] to identify factors related to implementation and sustainability, better comparisons, consolidations, and transfers of knowledge between projects may become possible.

This project has several limitations. First, the high, medium, and low sustainability assigned to the sites were based on the sites’ CCM sustainability relative to one another, rather than based on an external metric of sustainability. As measures of sustainability such as the Program Sustainability Assessment Tool [ 59 , 60 ] and the Sustainment Measurement System Scale [ 61 ] become increasingly developed and tested, future projects may consider the feasibility of incorporating such measures to assess each site’s sustainability. In our case, we worked on addressing this limitation by using a consensus approach within our project team to assign sustainability levels to sites, as well as by confirming that the sites that we designated as high sustainability exhibited CCM elements that we had previously observed at the end of their participation in the original CCM implementation trial [ 19 ]. Second, we did not assign strict thresholds above/below which the counts or proportions of data regarding a factor would automatically indicate whether the factor (i) was present, somewhat present, or minimally present and (ii) had an enabling, hindering, or neutral/unclear influence on sustainability. This follows widely accepted qualitative analytical guidance that discourages characterizing findings solely based on the frequency with which a notion is mentioned by participants [ 62 , 63 , 64 ], in order to prevent unsubstantiated inferences or conclusions. We sought to address this limitation in two ways: We carefully documented the project team’s rationale for each consensus reached, and we reviewed all consensuses reached in their entirety to ensure that any two factors with the same designation (e.g., “minimally present”) do not have associated rationale that conflict across those factors. These endeavors we undertook closely adhere to established case study research methods [ 65 ], which MMCS builds on, that emphasize strengthening the validity and reliability of findings through documenting a detailed analytic protocol, as well as reviewing data to ensure that patterns match across analytic units (e.g., factors, interviewees, sites). Third, our findings are based on three sites each for high/medium/low sustainability, and although we identified single factors associated with sustainability, we found no specific combinations of factors’ presence and influence that were repeatedly existent at a majority of the sites designated as high/medium/low sustainability. Examining additional sites on the factors identified through this work (as we will for our subsequent CCM implementation trial described below) will allow more opportunities for repeated combinations and other factors to emerge, making possible firmer conclusions regarding the extent to which the currently identified factors and absence of identified combinations are applicable beyond the sites included in this study. Fourth, the identified influencing factor “leadership support for CCM” (under the Context domain of the i-PARIHS framework) substantially overlaps in concept with the core “organizational/leadership support” element of the CCM. To avoid circular reasoning, we used leadership support-related data to inform our assignment of sites’ high, medium, or low CCM sustainability, rather than as a reason for the sites’ CCM sustainability. 
In reality, strong leadership support may both result from and contribute to implementation and sustainability [ 16 , 66 ], and thus causal relationships between the i-PARIHS-aligned influencing factors and the CCM elements (possibly with feedback loops) warrant further examination to most appropriately use leadership support-related data in future analyses of CCM sustainability. Fifth, findings may be subject to both social desirability bias in participants providing more positive than negative evidence of sustainability (especially participants who are responsible for implementing and sustaining CCM-aligned care at their site) and the project team members’ bias in interpreting the findings to align to their expectations of further effort being necessary to sustainably implement the CCM. To help mitigate this challenge, the project interviewers strove to elicit from participants both positive and negative perceptions and experiences related to CCM-based care delivery, both of which were present in the examined interview data.

Future work stemming from this project is twofold. Regarding CCM implementation, we will conduct a subsequent CCM implementation trial involving eight new sites to prospectively examine how implementation facilitation with an enhanced focus on these findings affects CCM sustainability. We started planning for sustainability prior to implementation, looking to this work for indicators of specific modifications needed to the previous way in which we used implementation facilitation to promote the uptake of CCM-based care [ 67 ]. Findings from this work suggest that sustainability may be related most strongly to (i) provider collaboration, (ii) knowledge retention during staff/leadership transitions, and (iii) availability of skilled internal facilitators. Hence, we will accordingly prioritize developing procedures for (i) regular CCM-related information exchange amongst BHIP team members, as well as between the BHIP team and clinic leadership, (ii) both translating knowledge to and keeping knowledge documented at the site, and (iii) supporting the sites’ own personnel to take the lead in driving CCM implementation.

Regarding MMCS, we will continuously refine and improve the method by learning from other projects applying, testing, and critiquing MMCS. Outside of our CCM-related projects, examinations of implementation data using MMCS are actively underway for various implementation efforts including that of a data dashboard for decision support on transitioning psychiatrically stable patients from specialty mental health to primary care [ 2 ], a peer-led healthy lifestyle intervention for individuals with serious mental illness [ 3 ], screening programs for intimate partner violence [ 4 ], and a policy- and organization-based health system strengthening intervention to improve health systems in sub-Saharan Africa [ 5 ]. As MMCS is used by more projects that differ from one another in their specific outcome of interest, and especially in light of our MMCS application that examines factors related to sustainability, we are curious whether certain proximal to distal outcomes are more subject to heterogeneity in influencing factors than other outcomes. For instance, sustainability outcomes, which are tracked following a longer passage of time than some other outcomes, may be subject to more contextual variations that occur over time and thus could particularly benefit from being examined using MMCS. We will also explore MMCS’ complementarity with coincidence analysis and other configurational analytical approaches [ 68 ] for examining implementation phenomena. We are excited about both the step-by-step traceability that MMCS can bring to such methods and those methods’ computational algorithms that can be beneficial to incorporate into MMCS for projects with larger numbers of sites. For example, Salvati and colleagues [ 69 ] described both the inspiration that MMCS provided in structuring their data as well as how they addressed MMCS’ visualization shortcomings through their innovative data matrix heat mapping, which led to their selection of specific factors to include in their subsequent coincidence analysis. Coincidence analysis is an enhancement to qualitative comparative analysis and other configurational analytical methods, in that it is formulated specifically for causal inference [ 70 ]. Thus, in considering improved reformulations of MMCS’ steps to better characterize examined factors as explicit causes to the outcomes of interest, we are inspired by and can draw on coincidence analysis’ approach to building and evaluating causal chains that link factors to outcomes. Relatedly, we have begun to actively consider the potential contribution that MMCS can make to hypothesis generation and theory development for implementation science. As efforts to understand the mechanisms through which implementation strategies work are gaining momentum [ 71 , 72 , 73 ], there is an increased need for methods that help decompose our understanding of factors that influence the mechanistic pathways from strategies to outcomes [ 74 ]. Implementation science is facing the need to develop theories, beyond frameworks, which delineate hypotheses for observed implementation phenomena that can be subsequently tested [ 75 ]. The methodical approach that MMCS offers can aid this important endeavor, by enabling data curation and examination of pertinent factors in a consistent way that allows meaningful synthesis of findings across sites and studies. 
We see these future directions as concrete steps toward elucidating the factors related to sustainable implementation of EBPs, especially leveraging data from projects where the number of sites is much smaller than the number of factors that may matter—which is indeed the case for most implementation projects.

Conclusions

Using MMCS, we found that provider collaboration, knowledge retention during staff/leadership transitions, and availability of skilled internal facilitators may be most strongly related to CCM sustainability in VA outpatient mental health clinics. Informed by these findings, we have a subsequent CCM implementation trial underway to prospectively test whether increasing the aforementioned factors within implementation facilitation enhances sustainability. The MMCS steps used here for systematic multi-site examination can also be applied to determining sustainability-related factors relevant to various other EBPs and implementation contexts.

Availability of data and materials

The data analyzed during the current project are not publicly available because participant privacy could be compromised.

Abbreviations

BHIP: Behavioral Health Interdisciplinary Program

CCM: Collaborative Chronic Care Model

COREQ: Consolidated Criteria for Reporting Qualitative Research

COVID: coronavirus disease

EBP: evidence-based practice

IRB: Institutional Review Board

i-PARIHS: Integrated Promoting Action on Research Implementation in Health Services

MMCS: Matrixed Multiple Case Study

VA: United States Department of Veterans Affairs

References

Kim B, Sullivan JL, Ritchie MJ, Connolly SL, Drummond KL, Miller CJ, et al. Comparing variations in implementation processes and influences across multiple sites: What works, for whom, and how? Psychiatry Res. 2020;283:112520.

Hundt NE, Yusuf ZI, Amspoker AB, Nagamoto HT, Kim B, Boykin DM, et al. Improving the transition of patients with mental health disorders back to primary care: A protocol for a partnered, mixed-methods, stepped-wedge implementation trial. Contemp Clin Trials. 2021;105:106398.

Tuda D, Bochicchio L, Stefancic A, Hawes M, Chen J-H, Powell BJ, et al. Using the matrixed multiple case study methodology to understand site differences in the outcomes of a Hybrid Type 1 trial of a peer-led healthy lifestyle intervention for people with serious mental illness. Transl Behav Med. 2023;13(12):919–27.

Adjognon OL, Brady JE, Iverson KM, Stolzmann K, Dichter ME, Lew RA, et al. Using the Matrixed Multiple Case Study approach to identify factors affecting the uptake of IPV screening programs following the use of implementation facilitation. Implement Sci Commun. 2023;4(1):145.

Seward N, Murdoch J, Hanlon C, Araya R, Gao W, Harding R, et al. Implementation science protocol for a participatory, theory-informed implementation research programme in the context of health system strengthening in sub-Saharan Africa (ASSET-ImplementER). BMJ Open. 2021;11(7):e048742.

Bauer MS, Miller C, Kim B, Lew R, Weaver K, Coldwell C, et al. Partnering with health system operations leadership to develop a controlled implementation trial. Implement Sci. 2016;11:22.

Bauer MS, Miller CJ, Kim B, Lew R, Stolzmann K, Sullivan J, et al. Effectiveness of implementing a Collaborative Chronic Care Model for clinician teams on patient outcomes and health status in mental health: a randomized clinical trial. JAMA Netw Open. 2019;2(3):e190230.

Ritchie MJ, Dollar KM, Miller CJ, Smith JL, Oliver KA, Kim B, et al. Using Implementation Facilitation to Improve Healthcare (Version 3): Veterans Health Administration, Behavioral Health Quality Enhancement Research Initiative (QUERI). 2020.

Bauer MS, Stolzmann K, Miller CJ, Kim B, Connolly SL, Lew R. Implementing the Collaborative Chronic Care Model in mental health clinics: achieving and sustaining clinical effects. Psychiatr Serv. 2021;72(5):586–9.

Miller CJ, Kim B, Connolly SL, Spitzer EG, Brown M, Bailey HM, et al. Sustainability of the Collaborative Chronic Care Model in outpatient mental health teams three years post-implementation: a qualitative analysis. Adm Policy Ment Health. 2023;50(1):151–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Von Korff M, Gruman J, Schaefer J, Curry SJ, Wagner EH. Collaborative management of chronic illness. Ann Intern Med. 1997;127(12):1097–102.

Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q. 1996;74(4):511–44.

Coleman K, Austin BT, Brach C, Wagner EH. Evidence on the chronic care model in the new millennium. Health Aff (Millwood). 2009;28(1):75–85.

Connolly SL, Sullivan JL, Ritchie MJ, Kim B, Miller CJ, Bauer MS. External facilitators’ perceptions of internal facilitation skills during implementation of collaborative care for mental health teams: a qualitative analysis informed by the i-PARIHS framework. BMC Health Serv Res. 2020;20(1):165.

Kim B, Sullivan JL, Drummond KL, Connolly SL, Miller CJ, Weaver K, et al. Interdisciplinary behavioral health provider perceptions of implementing the Collaborative Chronic Care Model: an i-PARIHS-guided qualitative study. Implement Sci Commun. 2023;4(1):35.

Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11:33.

Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.

Sullivan JL, Kim B, Miller CJ, Elwy AR, Drummond KL, Connolly SL, et al. Collaborative Chronic Care Model implementation within outpatient behavioral health care teams: qualitative results from a multisite trial using implementation facilitation. Implement Sci Commun. 2021;2(1):33.

Miller CJ, Sullivan JL, Kim B, Elwy AR, Drummond KL, Connolly S, et al. Assessing collaborative care in mental health teams: qualitative analysis to guide future implementation. Adm Policy Ment Health. 2019;46(2):154–66.

Miles MB, Huberman AM. Qualitative data analysis: an expanded sourcebook. Thousand Oaks: Sage; 1994.

Jones J, Hunter D. Consensus methods for medical and health services research. BMJ. 1995;311(7001):376–80.

Bens I. Facilitating with Ease!: core skills for facilitators, team leaders and members, managers, consultants, and trainers. Hoboken: John Wiley & Sons; 2017.

Ritchie MJ, Drummond KL, Smith BN, Sullivan JL, Landes SJ. Development of a qualitative data analysis codebook informed by the i-PARIHS framework. Implement Sci Commun. 2022;3(1):98.

Excel: Microsoft. Available from: https://www.microsoft.com/en-us/microsoft-365/excel . Accessed 15 Feb 2024.

Madrigal L, Manders OC, Kegler M, Haardörfer R, Piper S, Blais LM, et al. Inner and outer setting factors that influence the implementation of the National Diabetes Prevention Program (National DPP) using the Consolidated Framework for Implementation Research (CFIR): a qualitative study. Implement Sci Commun. 2022;3(1):104.

Wilson HK, Wieler C, Bell DL, Bhattarai AP, Castillo-Hernandez IM, Williams ER, et al. Implementation of the Diabetes Prevention Program in Georgia Cooperative Extension According to RE-AIM and the Consolidated Framework for Implementation Research. Prev Sci. 2023;Epub ahead of print.

Proctor E, Luke D, Calhoun A, McMillen C, Brownson R, McCrary S, et al. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10:88.

Fathi LI, Walker J, Dix CF, Cartwright JR, Joubert S, Carmichael KA, et al. Applying the Integrated Sustainability Framework to explore the long-term sustainability of nutrition education programmes in schools: a systematic review. Public Health Nutr. 2023;26(10):2165–79.

Guptill J. Knowledge management in health care. J Health Care Finance. 2005;31(3):10–4.

Gammelgaard J. Why not use incentives to encourage knowledge sharing. J Knowledge Manage Pract. 2007;8(1):115–23.

Liebowitz J. Knowledge retention: strategies and solutions. Boca Raton: CRC Press; 2008.

Ensslin L, CarneiroMussi C, RolimEnsslin S, Dutra A, Pereira Bez Fontana L. Organizational knowledge retention management using a constructivist multi-criteria model. J Knowledge Manage. 2020;24(5):985–1004.

Peterson AE, Bond GR, Drake RE, McHugo GJ, Jones AM, Williams JR. Predicting the long-term sustainability of evidence-based practices in mental health care: an 8-year longitudinal analysis. J Behav Health Serv Res. 2014;41(3):337–46.

Miller CJ, Griffith KN, Stolzmann K, Kim B, Connolly SL, Bauer MS. An economic analysis of the implementation of team-based collaborative care in outpatient general mental health clinics. Med Care. 2020;58(10):874–80.

Silver SA, Harel Z, McQuillan R, Weizman AV, Thomas A, Chertow GM, et al. How to begin a quality improvement project. Clin J Am Soc Nephrol. 2016;11(5):893–900.

Dixon-Woods M. How to improve healthcare improvement-an essay by Mary Dixon-Woods. BMJ. 2019;367:l5514.

Miller CJ, Kim B, Silverman A, Bauer MS. A systematic review of team-building interventions in non-acute healthcare settings. BMC Health Serv Res. 2018;18(1):146.

Robert G, Greenhalgh T, MacFarlane F, Peacock R. Organisational factors influencing technology adoption and assimilation in the NHS: a systematic literature review. Report for the National Institute for Health Research Service Delivery and Organisation programme. London; 2009.

Kelly CJ, Young AJ. Promoting innovation in healthcare. Future Healthc J. 2017;4(2):121–5.

PubMed   PubMed Central   Google Scholar  

Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt MS. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. 2015;10:11.

Ritchie MJ, Parker LE, Kirchner JE. Facilitating implementation of primary care mental health over time and across organizational contexts: a qualitative study of role and process. BMC Health Serv Res. 2023;23(1):565.

van den Hoed MW, Backhaus R, de Vries E, Hamers JPH, Daniëls R. Factors contributing to innovation readiness in health care organizations: a scoping review. BMC Health Serv Res. 2022;22(1):997.

Melnyk BM, Hsieh AP, Messinger J, Thomas B, Connor L, Gallagher-Ford L. Budgetary investment in evidence-based practice by chief nurses and stronger EBP cultures are associated with less turnover and better patient outcomes. Worldviews Evid Based Nurs. 2023;20(2):162–71.

Jacob RR, Parks RG, Allen P, Mazzucca S, Yan Y, Kang S, et al. How to “start small and just keep moving forward”: mixed methods results from a stepped-wedge trial to support evidence-based processes in local health departments. Front Public Health. 2022;10:853791.

Aarons GA, Conover KL, Ehrhart MG, Torres EM, Reeder K. Leader-member exchange and organizational climate effects on clinician turnover intentions. J Health Organ Manag. 2020;35(1):68–87.

Kirchner JE, Ritchie MJ, Pitcock JA, Parker LE, Curran GM, Fortney JC. Outcomes of a partnered facilitation strategy to implement primary care-mental health. J Gen Intern Med. 2014;29 Suppl 4(Suppl 4):904–12.

Strategy Design: CFIR research team-center for clinical management research. Available from: https://cfirguide.org/choosing-strategies/ . Accessed 15 Feb 2024.

Kim B, Wilson SM, Mosher TM, Breland JY. Systematic decision-making for using technological strategies to implement evidence-based interventions: an illustrated case study. Front Psychiatry. 2021;12:640240.

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.

Lewis CC, Scott K, Marriott BR. A methodology for generating a tailored implementation blueprint: an exemplar from a youth residential setting. Implement Sci. 2018;13(1):68.

Maher C, Hadfield M, Hutchings M, de Eyto A. Ensuring rigor in qualitative data analysis: a design research approach to coding combining NVivo with traditional material methods. Int J Qual Methods. 2018;17(1):1609406918786362.

Holloway I. A-Z of qualitative research in healthcare. 2nd ed. Oxford: Wiley-Blackwell; 2008.

Reproducibility and Replicability in Research: National Academies. Available from: https://www.nationalacademies.org/news/2019/09/reproducibility-and-replicability-in-research . Accessed 15 Feb 2024.

Chinman M, Acosta J, Ebener P, Shearer A. “What we have here, is a failure to [Replicate]”: ways to solve a replication crisis in implementation science. Prev Sci. 2022;23(5):739–50.

Vicente-Saez R, Martinez-Fuentes C. Open Science now: a systematic literature review for an integrated definition. J Bus Res. 2018;88:428–36.

Consolidated Framework for Implementation Research: CFIR Research Team-Center for Clinical Management Research. Available from: https://cfirguide.org/ . Accessed 15 Feb 2024.

Atkins L, Francis J, Islam R, O’Connor D, Patey A, Ivers N, et al. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems. Implement Sci. 2017;12(1):77.

Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.

Calhoun A, Mainor A, Moreland-Russell S, Maier RC, Brossart L, Luke DA. Using the Program Sustainability Assessment Tool to assess and plan for sustainability. Prev Chronic Dis. 2014;11:130185.

Palinkas LA, Chou CP, Spear SE, Mendon SJ, Villamar J, Brown CH. Measurement of sustainment of prevention programs and initiatives: the sustainment measurement system scale. Implement Sci. 2020;15(1):71.

Sandelowski M. Real qualitative researchers do not count: the use of numbers in qualitative research. Res Nurs Health. 2001;24(3):230–40.

Wood M, Christy R. Sampling for Possibilities. Qual Quant. 1999;33(2):185–202.

Chang Y, Voils CI, Sandelowski M, Hasselblad V, Crandell JL. Transforming verbal counts in reports of qualitative descriptive studies into numbers. West J Nurs Res. 2009;31(7):837–52.

Yin RK. Case study research and applications. Los Angeles: Sage; 2018.

Bauer MS, Weaver K, Kim B, Miller C, Lew R, Stolzmann K, et al. The Collaborative Chronic Care Model for mental health conditions: from evidence synthesis to policy impact to scale-up and spread. Med Care. 2019;57 Suppl 10 Suppl 3(10 Suppl 3):S221-s7.

Miller CJ, Sullivan JL, Connolly SL, Richardson EJ, Stolzmann K, Brown ME, et al. Adaptation for sustainability in an implementation trial of team-based collaborative care. Implement Res Pract. 2024;5:26334895231226197.

Curran GM, Smith JD, Landsverk J, Vermeer W, Miech EJ, Kim B, et al. Design and analysis in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and Implementation Research in Health: Translating Science to Practice. 3 ed. New York: Oxford University Press; In press.

Salvati ZM, Rahm AK, Williams MS, Ladd I, Schlieder V, Atondo J, et al. A picture is worth a thousand words: advancing the use of visualization tools in implementation science through process mapping and matrix heat mapping. Implement Sci Commun. 2023;4(1):43.

Whitaker RG, Sperber N, Baumgartner M, Thiem A, Cragun D, Damschroder L, et al. Coincidence analysis: a new method for causal inference in implementation science. Implement Sci. 2020;15(1):108.

Lewis CC, Powell BJ, Brewer SK, Nguyen AM, Schriger SH, Vejnoska SF, et al. Advancing mechanisms of implementation to accelerate sustainable evidence-based practice integration: protocol for generating a research agenda. BMJ Open. 2021;11(10):e053474.

Kilbourne AM, Geng E, Eshun-Wilson I, Sweeney S, Shelley D, Cohen DJ, et al. How does facilitation in healthcare work? Using mechanism mapping to illuminate the black box of a meta-implementation strategy. Implement Sci Commun. 2023;4(1):53.

Kim B, Cruden G, Crable EL, Quanbeck A, Mittman BS, Wagner AD. A structured approach to applying systems analysis methods for examining implementation mechanisms. Implementation Sci Commun. 2023;4(1):127.

Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19(2):e1003918.

Luke DA, Powell BJ, Paniagua-Avila A. Bridges and mechanisms: integrating systems science thinking into implementation research. Annu Rev Public Health. In press.

Download references

Acknowledgements

The authors sincerely thank the project participants for their time, as well as the project team members for their guidance and support. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs or the United States government.

Funding

This project was funded by VA grant QUE 20–026 and was designed and conducted in partnership with the VA Office of Mental Health and Suicide Prevention.

Author information

Authors and Affiliations

Center for Healthcare Organization and Implementation Research (CHOIR), VA Boston Healthcare System, 150 South Huntington Avenue, Boston, MA, 02130, USA

Bo Kim, Madisen E. Brown, Samantha L. Connolly, Elizabeth G. Spitzer, Hannah M. Bailey & Christopher J. Miller

Harvard Medical School, 25 Shattuck Street, Boston, MA, 02115, USA

Bo Kim, Samantha L. Connolly & Christopher J. Miller

Center of Innovation in Long Term Services and Supports (LTSS COIN), VA Providence Healthcare System, 385 Niagara Street, Providence, RI, 02907, USA

Jennifer L. Sullivan

Brown University School of Public Health, 121 South Main Street, Providence, RI, 02903, USA

VA Rocky Mountain Mental Illness Research, Education and Clinical Center (MIRECC), 1700 N Wheeling Street, Aurora, CO, 80045, USA

Elizabeth G. Spitzer

VA Northeast Program Evaluation Center, 950 Campbell Avenue, West Haven, CT, 06516, USA

Lauren M. Sippel

Geisel School of Medicine at Dartmouth, 1 Rope Ferry Road, Hanover, NH, 03755, USA

VA Office of Mental Health and Suicide Prevention, 810 Vermont Avenue NW, Washington, DC, 20420, USA

Kendra Weaver


Contributions

Concept and design: BK, JS, and CM. Acquisition, analysis, and/or interpretation of data: BK, JS, MB, SC, ES, and CM. Initial drafting of the manuscript: BK. Critical revisions of the manuscript for important intellectual content: JS, MB, SC, ES, HB, LS, KW, and CM. All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Bo Kim .

Ethics declarations

Ethics approval and consent to participate

This project was determined to be non-research by the VA Boston Research and Development Service, and therefore did not require oversight by the Institutional Review Board (IRB).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

COREQ (COnsolidated criteria for REporting Qualitative research) Checklist.

Additional file 2.

Data input, tasks performed, and analysis output for MMCS Steps 5 through 9.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Kim, B., Sullivan, J.L., Brown, M.E. et al. Sustaining the collaborative chronic care model in outpatient mental health: a matrixed multiple case study. Implementation Sci 19, 16 (2024). https://doi.org/10.1186/s13012-024-01342-2


Received: 14 June 2023

Accepted: 21 January 2024

Published: 19 February 2024

DOI: https://doi.org/10.1186/s13012-024-01342-2


Keywords

  • Collaborative care
  • Implementation
  • Interdisciplinary care
  • Mental health
  • Sustainability


Migrant encounters at the U.S.-Mexico border hit a record high at the end of 2023

The U.S. Border Patrol had nearly 250,000 encounters with migrants crossing into the United States from Mexico in December 2023, according to government statistics. That was the highest monthly total on record, easily eclipsing the previous peak of about 224,000 encounters in May 2022.

Chart: 2023 ended with more migrant encounters at the U.S.-Mexico border than any month on record.

The monthly number of encounters has soared since 2020, when the coronavirus pandemic temporarily forced the U.S.-Mexico border to close and slowed migration across much of the world. In April 2020, the Border Patrol recorded around 16,000 encounters – among the lowest monthly totals in decades.

This Pew Research Center analysis examines migration patterns at the U.S.-Mexico border using current and historical data from U.S. Customs and Border Protection, the federal agency that includes the U.S. Border Patrol. The analysis is based on a metric known as migrant encounters.

The term “encounters” refers to two distinct types of events:

  • Apprehensions: Migrants are taken into custody in the United States, at least temporarily, to await a decision on whether they can remain in the country legally, such as by being granted asylum. Apprehensions are carried out under Title 8 of the U.S. code, which deals with immigration law.
  • Expulsions: Migrants are immediately expelled to their home country or last country of transit without being held in U.S. custody. Expulsions are carried out under Title 42 of the U.S. code, a previously rarely used section of the law that the Trump administration invoked during the early stages of the COVID-19 pandemic. The law empowers federal health authorities to stop migrants from entering the country if it is determined that barring them could prevent the spread of contagious diseases. The Biden administration stopped the use of Title 42 in May 2023, when the federal government declared an end to the COVID-19 public health emergency.

It is important to note that encounters refer to events, not people, and that some migrants are encountered more than once. As a result, the overall number of encounters may overstate the number of distinct individuals involved.

This analysis is limited to monthly encounters between ports of entry involving the Border Patrol. It excludes encounters at ports of entry involving the Office of Field Operations.
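To make the events-versus-people distinction above concrete, here is a minimal illustrative sketch in Python. The person IDs and records are invented for illustration only (they are not CBP data); the point is simply that counting events and counting distinct individuals give different totals when some people are encountered more than once.

```python
from collections import Counter

# Hypothetical encounter records: (person_id, processing_authority).
# Invented for illustration -- not actual CBP data.
encounters = [
    ("A", "Title 8 apprehension"),
    ("A", "Title 42 expulsion"),   # the same person, encountered twice
    ("B", "Title 42 expulsion"),
    ("C", "Title 8 apprehension"),
]

total_encounters = len(encounters)                            # counts events
distinct_people = len({person for person, _ in encounters})   # counts individuals
by_authority = Counter(kind for _, kind in encounters)

print(f"Encounters (events):  {total_encounters}")   # 4
print(f"Distinct individuals: {distinct_people}")    # 3
print(f"By processing authority: {dict(by_authority)}")
```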

Since that April 2020 low, the monthly number of migrant encounters at the U.S.-Mexico border has surpassed 200,000 on 10 separate occasions. That threshold previously hadn't been reached since March 2000, when there were about 220,000 encounters.

It's not clear whether the recent high numbers of encounters at the border will persist in 2024. In January 2024, encounters fell to around 124,000, according to the latest available statistics.

Chart: Use of Title 42 began during the coronavirus pandemic and ended in May 2023.

In the early months of the pandemic in the U.S., the Border Patrol relied heavily on Title 42 to expel most of the migrants it encountered at the border rather than apprehending them under Title 8. Since the end of Title 42 in May 2023, the Border Patrol has been apprehending migrants within the U.S. instead of expelling them from the country.

Related: Key facts about Title 42, the pandemic policy that has reshaped immigration enforcement at U.S.-Mexico border

Who is crossing the U.S.-Mexico border?

Chart: A growing share of migrant encounters involve people traveling in families.

In December 2023, most encounters at the U.S.-Mexico border (54%) involved migrants traveling as single adults, while 41% involved people traveling in families and 5% involved unaccompanied minors.

In recent months, a growing number of encounters have involved people traveling in families. In December 2023, the Border Patrol had nearly 102,000 encounters with family members, up from around 61,000 a year earlier.

There has also been a shift in migrants’ origin countries. Historically, most encounters at the southwestern border have involved citizens of Mexico or the Northern Triangle nations of El Salvador, Guatemala and Honduras. But in December 2023, 54% of encounters involved citizens of countries other than these four nations.

Chart: Most border encounters now involve people from countries other than Mexico and the Northern Triangle.

Venezuelans, in particular, stand out. Nearly 47,000 migrant encounters in December 2023 involved citizens of Venezuela, up from about 6,000 a year earlier. The number of encounters involving Venezuelans was second only to the approximately 56,000 involving Mexicans in December 2023.

There has also been a sharp increase in encounters with citizens of China, despite its distance from the U.S.-Mexico border. The Border Patrol reported nearly 6,000 encounters with Chinese citizens at the southwestern border in December 2023, up from around 900 a year earlier.
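To follow the year-over-year comparisons cited above, the short sketch below recomputes the approximate changes from the rounded figures mentioned in this post (family encounters, Venezuela, and China). The inputs are the rounded values from the text, not exact CBP counts.

```python
# Rounded December figures cited in the text above (not exact CBP counts).
dec_2022 = {"Family members": 61_000, "Venezuela": 6_000, "China": 900}
dec_2023 = {"Family members": 102_000, "Venezuela": 47_000, "China": 6_000}

for group, earlier in dec_2022.items():
    later = dec_2023[group]
    change = later - earlier
    pct = 100 * change / earlier
    # e.g., Venezuela: 6,000 -> 47,000 (+41,000, +683%)
    print(f"{group}: {earlier:,} -> {later:,} ({change:+,}, {pct:+.0f}%)")
```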

How do Americans view the situation at the border?

The American public is broadly dissatisfied with how things are going at the border, according to a new Pew Research Center survey.

Eight-in-ten U.S. adults say the government is doing a very or somewhat bad job dealing with the large number of migrants seeking to enter the U.S. at the border with Mexico. And nearly as many say the situation is either a “crisis” (45%) or a “major problem” (32%) for the U.S.

Note: This is an update of a post originally published on March 15, 2021.

