Intro to Research Methods.pdf
Then comes deductive reasoning, a method of thinking that moves from the
general to the specific. It starts with a broad principle and applies it to a
particular case. For example: all mammals have lungs; a rabbit is a mammal;
therefore, a rabbit has lungs. Deduction is logical, but its reliability
depends completely on the truth of the initial generalization. If the
starting point is flawed, the conclusion will be too.
On the other hand, there’s inductive reasoning, which moves from specific
observations to general conclusions. If you observe many rabbits and all of
them have lungs, you might conclude that all rabbits in the world must have
lungs. Induction is powerful for building generalizations from real-world
patterns, but it's also uncertain—because you never know if the next case
will break the pattern.
With that definition in place, we can now speak about educational research,
which is the application of this scientific method to problems in the field of
education. Educational research seeks to improve our understanding of the
teaching and learning process. It involves asking questions about student
behavior, curriculum effectiveness, classroom strategies, teacher
performance, educational policy, and more. It is both practical and
theoretical. The ultimate goal is to discover general principles or
explanations of behavior that educators can use to predict, explain, and
control events in educational contexts.
Educational research is classified into three major types. The first is basic
research (also known as fundamental or pure research). This aims to expand
theoretical knowledge without any immediate application in mind. It is
concerned with discovering principles that have universal validity. It relies on
rigorous methodology, careful sampling, and systematic procedures to
build theories. The second type is applied research, which aims to solve
immediate practical problems. It is conducted in real-life settings like
classrooms and often focuses on the effectiveness of specific interventions or
teaching strategies. Although it may not produce general laws, it is extremely
useful in guiding practice. The third type is action research, which is unique
because it is conducted by practitioners themselves—teachers, principals,
counselors—who investigate their own practices to improve them. It is
collaborative, practical, and often cyclical, involving a process of planning,
acting, observing, and reflecting. The goal is not to build theory but to bring
about local change and professional growth.
To truly understand what research is, we must first define its essential
characteristics. Research, in its most authentic form, is not random
information-gathering. It is a structured inquiry that uses acceptable
scientific methodology to solve problems and create knowledge that is
valid, verifiable, and (ideally) generalizable. Every real research process must
contain certain qualities that make it rigorous and trustworthy.
The first essential quality is that research must be controlled. This means that
when studying the relationship between variables, we must minimize or
eliminate the influence of outside factors. In physical sciences, control is
easier because experiments often happen in laboratories. In the social
sciences, especially in education, perfect control is difficult because people
are involved—each with different emotions, histories, and environments.
Still, control remains a goal. Even if we cannot eliminate external factors
entirely, we must account for them and understand how they influence the
phenomena we are studying.
The second is that research must be rigorous. This refers to the thoroughness
with which the researcher designs, conducts, and analyzes the study.
Procedures must be justified, logical, and aligned with the goals of the
inquiry. In other words, you don’t just pick a method because it’s easy—you
use it because it fits your question, your variables, and your context.
Now that we’ve defined what research must be, we can examine its types,
which are classified from three different perspectives: the application of
findings, the objectives of the study, and the mode of inquiry.
Here, research is classified based on whether its findings are used to solve
real-world problems or to build theoretical understanding.
1. Pure Research (also called fundamental or basic research) is done for the
sake of knowledge itself. Its purpose is to test and refine theories, methods, or
ideas. It might not have any immediate practical use. For instance, a study on
how human memory stores information under stress may not help a teacher
today, but it builds knowledge that could inform educational psychology in
the future. Pure research deals with abstractions, conceptual problems, and
general principles. It is intellectually driven.
2. Applied Research, on the other hand, deals directly with practical
problems. It applies theories and methods developed through pure research to
real-life situations. In education, applied research might investigate whether a
specific teaching method improves reading skills among third-grade students.
The goal is not to build a general theory, but to solve a current problem or
improve practice. Applied research is what most teachers and policymakers
turn to when they want immediate guidance; when practitioners conduct it on
their own practice in order to improve it, it shades into action research.
It’s important to know that pure and applied research are not opposites—they
often inform one another. Pure research may eventually lead to practical
applications, and applied research may raise theoretical questions that push
basic research further.
This third lens classifies research based on how the data is collected and how
flexible or structured the process is. There are two main types:
It’s important to note that qualitative and quantitative research are not
enemies. They are complementary. Many studies use a mixed methods
approach, which combines both. For example, you might start with
qualitative interviews to explore student attitudes, and then use that data to
build a structured questionnaire for a larger quantitative study.
Research Paradigms
There are two dominant paradigms in social science research. The first is the
positivist paradigm, also called the scientific or systematic approach. It
assumes that reality is objective and measurable. This paradigm fits well with
quantitative research. It emphasizes objectivity, control, and statistical
analysis. The second is the naturalistic (or interpretive) paradigm, which
assumes that reality is subjective and socially constructed. It fits well with
qualitative research and emphasizes context, meaning, and depth.
These paradigms are not mutually exclusive. In fact, many researchers today
argue that the purpose of the study should determine the paradigm—not
personal preference or academic tradition. Sometimes, a positivist lens helps
answer the question. Other times, a naturalistic lens is better suited. The best
researchers learn to switch between paradigms depending on the nature of the
inquiry.
Final Reflection
Formulating the research problem is the first and arguably the most critical
step in the entire research process. Before anything else, you need to identify
what you are trying to study. A research problem is not just a topic—it is a
clearly defined question
or issue that needs investigation. There are two broad types of problems:
those dealing with the state of nature (descriptive or factual situations) and
those concerned with relationships between variables (causal or
correlational issues). The formulation involves narrowing down a broad idea
into a specific, researchable problem.
To do this effectively, you must fully understand the problem, often by
discussing it with colleagues or experts. Then, you rephrase it into
operational terms—language that is specific, measurable, and clear. This step
also includes defining any key terms or variables involved in the problem. A
well-defined research problem determines the direction of the entire study:
the data you’ll need, the methods you’ll use, and how you’ll analyze and
interpret your findings.
A research design is the blueprint of your study. It explains how you will
carry out your research, from selecting subjects to collecting and analyzing
data. A good research design ensures that you collect valid, reliable, and
relevant data using minimal resources (time, money, and effort).
Choosing the right sampling method is vital. Poor sampling leads to bias,
inaccurate results, and invalid conclusions. In some cases, mixed sampling
techniques may be used.
● Telephone interviews
● Mail questionnaires
The method you choose depends on the nature of your study, your resources,
the desired accuracy, and the population involved.
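Simple random sampling, one of the most common techniques, can be sketched in a few lines of Python. The sampling frame of 500 student IDs and the sample size of 50 are hypothetical, chosen only for illustration.

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

# Hypothetical sampling frame: 500 student ID numbers
population = list(range(1, 501))

# Simple random sample of 50 students, drawn without replacement,
# so every student has an equal chance of selection
sample = random.sample(population, 50)

print(len(sample), len(set(sample)))  # 50 unique IDs, no duplicates
```

Because `random.sample` draws without replacement, no student can be selected twice, which is exactly the property a simple random sample requires.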
Execution means carrying out the research exactly as designed. This is where
planning becomes action. The researcher must ensure that data collection is
consistent, systematic, and of high quality. If using interviews or surveys,
interviewers need training, and field checks should be conducted to verify
accuracy and honesty.
9. Hypothesis Testing
Once the data is analyzed, the next step is to test whether your initial
hypothesis holds true. This is done using statistical tests such as:
Interpretation involves explaining what the results mean, why they matter,
and how they relate to existing knowledge. It also involves identifying new
questions or future areas of research. Good interpretation connects results to
theory, context, and application.
Final Thoughts
A research approach combines three interrelated components:
1. Philosophical worldviews
2. Research designs
3. Research methods
These components are not chosen randomly—they must cohere with one
another and with the nature of the research problem. The approach you
choose will shape how you view your data, how you ask your questions, how
you analyze what you find, and even what you consider a valid answer. For
example, someone trying to measure the impact of a specific drug will need a
highly controlled, quantitative approach. Someone studying how a group of
refugees rebuilds their identity through storytelling will need a qualitative,
open-ended, interpretive approach. So the selection of a research approach
isn’t just technical—it reflects your beliefs about reality, about how
knowledge is formed, and what kind of understanding you are aiming to
produce.
There are three dominant research approaches that every researcher must be
familiar with:
Qualitative Research
For example, imagine you want to study how young immigrant girls in
Morocco experience the transition to university. A qualitative approach might
involve in-depth interviews, participant observations, and even collecting
personal diaries. The point is not to generalize or predict—but to understand
their lived realities, in depth, from their own perspectives. Qualitative
researchers often work inductively, meaning they don’t start with a fixed
theory. Instead, they let the theory emerge from the data they gather, through
a process of reflection, pattern identification, and interpretation.
Quantitative Research
Let’s say you want to test whether students who attend private tutoring
centers perform better on standardized English tests than those who don’t.
You would formulate a hypothesis, select variables (e.g., hours of tutoring,
test scores), and measure them with instruments such as surveys or test
results. Your goal would be to use statistical analysis to either support or
reject your hypothesis.
Before you choose a method or design, you need to ask: what do I believe
about knowledge? This is where philosophical worldviews come in. These
are not optional—they are the foundational belief systems that influence
every research decision. There are four major worldviews that you need to
deeply understand:
Postpositivism
Postpositivists start with a theory, define testable variables, and use tools
like surveys, experiments, and statistical analysis to reduce complexity into
measurable parts. Their goal is to discover laws or causal relationships that
can be generalized. For example, you might ask: “Does access to Wi-Fi
increase student achievement in remote schools?” A postpositivist would try
to isolate variables and control other factors to test this.
Constructivism
Transformative
Pragmatism
Each research approach contains specific designs. These are like the
architectural plans for the building—they lay out the structure of how you’ll
collect and analyze your data.
Each design has its logic, rules, procedures, and applications. Choosing the
wrong design—or misunderstanding how to implement it—can destroy the
credibility of your research.
The methods refer to the actual tools and techniques used to gather data:
interviews, questionnaires, focus groups, statistical software, field notes,
coding procedures, etc. These are the instruments you use to bring your
research design to life. But they must always align with your worldview and
design. You can’t use a postpositivist mindset and apply open-ended
interviews without thinking through how they’ll be analyzed, interpreted, and
made rigorous.
Two of the most frequently used quantitative designs are surveys and
experiments:
1. Survey Research
For example, you might use a survey to investigate how Moroccan university
students perceive AI-based learning platforms. You’d create a questionnaire
with closed-ended questions (e.g., Likert scales), administer it to a
representative sample, and then analyze the responses statistically to identify
trends or differences across variables like gender, field of study, or level of
experience with technology.
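A first pass at analyzing such closed-ended responses is simple descriptive statistics by group. The response values and group labels below are invented for illustration; a real analysis would also report dispersion and run inferential tests across the variables mentioned above.

```python
import statistics

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# to an item such as "AI-based platforms help my learning",
# grouped by field of study
responses = {
    "humanities": [3, 4, 2, 3, 4, 3, 2],
    "sciences":   [4, 5, 4, 3, 5, 4, 4],
}

# Compare central tendency across groups
for group, scores in responses.items():
    print(group, round(statistics.mean(scores), 2))
```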
For instance, you might want to test whether a new online grammar tool
improves writing accuracy in EFL students. You could randomly assign half
of a class to use the tool for a month, while the other half uses traditional
methods. Then, you compare their writing test scores. If the tool-users
perform significantly better, you may conclude that the tool is effective.
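The comparison described above is typically made with an independent-samples t test. The scores below are fabricated for illustration; this minimal sketch computes Welch's t statistic (which does not assume equal variances) using only the standard library.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic: the difference in means divided
    by the combined standard error (no equal-variance assumption)."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    se = math.sqrt(var_a / len(sample_a) + var_b / len(sample_b))
    return (mean_a - mean_b) / se

# Hypothetical post-test writing scores for the two halves of the class
tool_group  = [78, 82, 75, 88, 91, 73, 85, 80]
traditional = [70, 74, 68, 79, 72, 66, 77, 71]

t = welch_t(tool_group, traditional)
print(round(t, 2))  # a large positive t favors the tool group
```

In practice you would compare t against the critical value for the appropriate degrees of freedom, or compute a p value (for example with `scipy.stats.ttest_ind`), before claiming the difference is significant.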
2. Phenomenological Research
Let’s say you want to study the experience of anxiety during public speaking
among EFL learners. You would interview multiple students, identify
common patterns in how they describe their anxiety, and distill those into a
rich description of what that experience is like across participants. The aim is
not explanation or comparison, but deep description of experience.
3. Grounded Theory
Grounded theory is used when you want to develop a new theory based on
data gathered directly from participants. It is especially useful when existing
theories don’t fully explain the process or behavior you’re studying.
4. Ethnography
5. Case Study
For example, you could do a case study on a single school that implemented a
bilingual education policy. You’d explore how it was planned, how teachers
were trained, how students responded, and what outcomes emerged.
Mixed methods research is not just about combining interviews with surveys.
It’s about integration—using both qualitative and quantitative elements to
provide a comprehensive understanding of a problem. The designs are
structured around the timing of data collection and the purpose of combining
the data.
1. Convergent Design
In a convergent design, you collect both types of data at the same time,
analyze them separately, and then merge the results. This helps you validate
findings across methods or see where they confirm or contradict one
another.
For example, you might survey teachers about their attitudes toward inclusive
education (quantitative) while also interviewing a few of them to understand
the reasons behind those attitudes (qualitative). You compare the results to
see how they complement each other.
2. Explanatory Sequential Design
Imagine you conduct a survey showing that female STEM students have
lower confidence in math than their male peers. You might then interview a
small number of these students to understand why they feel that way and
what social or educational factors contribute to it.
3. Exploratory Sequential Design
The opposite of the previous design, this one starts with qualitative
exploration, then uses quantitative tools to follow up. You might conduct
focus groups to explore student engagement, and then develop a survey
instrument based on the themes that emerged, which you distribute to a larger
sample.
SUMMARY OF DESIGNS
Each design has strengths and limitations. What matters is that your choice of
design fits:
A research problem is more than just a topic—it’s the specific issue, gap, or
need that your study seeks to address. This problem often emerges from a
close reading of the existing literature, where you notice something missing,
contested, or unexplored. But just identifying a problem isn’t enough. You
must also choose an approach that best allows you to investigate it in a valid
and meaningful way.
But what if your problem requires both kinds of insight? Let’s say you want
to study how teachers’ beliefs about inclusion affect their classroom
practices. You might start by surveying a large sample of teachers to identify
general attitudes, then follow up with interviews to understand how those
beliefs play out in real classrooms. In this case, you’re looking at both
breadth and depth—so a mixed methods approach is ideal.
Mixed methods are particularly useful when one type of data alone doesn’t
tell the full story, or when one phase of the research raises questions that
require a different method to answer. These designs are powerful, but they
also demand a lot of time, expertise, and planning.
Personal Experience
But it’s not only about comfort—it’s also about the integrity of the
research. If you’re doing a qualitative study, but you don’t know how to
conduct a proper interview or analyze themes in narrative data, you risk
misinterpreting your results. Likewise, if you’re running a quantitative
experiment but you don’t know how to control for confounding variables or
apply the right statistical tests, your findings may be invalid.
That’s why mixed methods research can be particularly challenging—it
demands a dual skill set. You need to be able to switch between numerical
precision and narrative interpretation. You need the patience to do twice the
amount of data collection and the analytical clarity to combine different kinds
of evidence into a coherent conclusion. Researchers who pursue mixed
methods must be methodologically bilingual.
The Audience
The key is to align your research with your audience’s values, standards,
and interests without compromising the integrity of your work. You’re not
only producing knowledge—you’re communicating it, and that means
understanding who your readers are and what they expect.
Final Thoughts
There is no single “correct” approach. The best researchers are those who
match their methods to their mission, who respect the complexity of their
topic, and who make these decisions consciously and coherently.
The chapter opens by reminding us that selecting a topic comes before the
literature review itself. The researcher must first clarify what they are studying and
why it is meaningful. A topic should be written in simple, clear terms—like a
phrase or a working title—to provide continuous orientation during the
research design process. Something like “my study is about how Moroccan
high school teachers use digital tools in remote learning” might serve as a
preliminary anchor. The authors stress that novice researchers often try to
imitate overly complex or abstract writing they see in published journal
articles, which can cloud their initial thinking. In reality, the best research
topics start with straightforward, focused ideas. As the project develops,
these ideas will evolve into more complex structures, but they must begin
grounded and accessible.
A working title is useful not just for the researcher, but for communicating
the study’s direction to others—supervisors, colleagues, committees. The
authors recommend creating a brief phrase or even posing a direct question
such as, “What influences student motivation in online learning
environments?” These types of expressions offer both conceptual clarity and
focus. Importantly, before moving forward, the researcher must reflect on
whether the topic is researchable (can it realistically be studied with the
resources and access available?) and whether it should be researched (does it
contribute something meaningful to the field?). This is not a matter of
opinion—it must be evaluated through academic criteria.
Once the topic is confirmed as worth studying, the next task is to review the
relevant literature. The literature review serves several essential functions.
First, it demonstrates the researcher’s familiarity with previous studies on
the topic. This is critical not only for scholarly credibility but also to avoid
duplication, identify gaps, and build upon existing knowledge. Second, it
places the study within an ongoing academic dialogue, showing how it
contributes to, extends, or challenges what’s already known. Third, it helps
justify the importance of the study and provides a comparative framework
for analyzing and interpreting the eventual findings.
The way the literature is used depends heavily on the research approach. In
qualitative research, literature is used inductively. Since qualitative studies
often aim to explore new or poorly understood phenomena, they avoid
imposing theoretical frameworks too early. Instead, the literature might be
used sparingly in the introduction to justify the problem but will be expanded
later to compare findings. Sometimes in theory-driven qualitative
designs—like ethnography or critical theory—literature is introduced earlier
to establish a conceptual framework. In grounded theory, case studies, or
phenomenology, the literature is often held back until after findings emerge,
to allow participants’ voices and experiences to shape the analysis. The
chapter outlines three placement models for literature in qualitative studies:
At this point, you’re aiming to find around 50 relevant sources. You won’t
use all of them, but this is a good number to work with as you start
evaluating. As you read through the titles and abstracts, you begin to narrow
down to the most central and relevant studies—those that directly
contribute to your understanding of your own research problem. This requires
skimming, then deep reading, and evaluating how each article relates to
your topic, either by supporting, challenging, or expanding it.
As you gather sources, you are not simply making a list—you begin building
a literature map. A literature map is a visual diagram that shows the
relationship between different clusters of literature. It helps you organize
your review thematically. The top of the map might represent your broad
research topic (e.g., “student motivation”), then branches might break into
themes like “feedback and motivation,” “cultural context of learning,”
“gender differences in response to feedback,” and so on. These visual maps
are not included in every thesis, but they are essential tools for your own
thinking—they help you identify where your study will contribute something
new.
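Since a literature map is essentially a tree, it can even be sketched as a nested structure, which makes it easy to count and reorganize clusters as the map grows. The themes and source labels below are placeholders, not real studies.

```python
# A literature map as a nested dict: branches are themes, leaves are sources
literature_map = {
    "student motivation": {
        "feedback and motivation": ["Source A", "Source B"],
        "cultural context of learning": ["Source C"],
        "gender differences in response to feedback": ["Source D", "Source E"],
    }
}

def count_sources(node):
    """Recursively count the leaf sources under any branch of the map."""
    if isinstance(node, list):
        return len(node)
    return sum(count_sources(child) for child in node.values())

print(count_sources(literature_map))  # 5
```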
The next step is to begin writing summaries and abstracts of each important
article. These are not just for citations—they are short analytical notes that
record:
The goal is to create a database in your own mind of what has already been
studied, how it was studied, and where the gaps are. For instance, you might
notice that several studies examine feedback and motivation, but none focus
on Moroccan students, or none look at the impact of gender, or that all were
done in university contexts rather than high schools. These observations
justify the gap your study will fill.
1. Introduction – where you tell the reader how your literature review is
structured.
This model works very well because it aligns directly with your research
questions or hypotheses. It helps reviewers see that you have covered all
theoretical bases and that your study is both logical and needed.
● Historical context
● Cultural perspectives
● Methodological insights
For mixed methods research, the structure of the literature review depends
on the dominant strand. If your study begins with a quantitative phase, you
follow the variable-based model. If it begins with qualitative data collection,
you follow the thematic model. If your study is truly convergent, you will
need to blend both—possibly using parallel subsections.
ABSTRACTING STUDIES: HOW TO READ DEEPLY, THINK
CRITICALLY, AND RECORD STRATEGICALLY
For example, let’s say you read a quantitative study that tests whether daily
reading practice improves vocabulary retention in ESL learners. Your abstract
might include: the research problem (limited vocabulary acquisition in ESL
settings), the aim (to test whether daily reading improves outcomes), the
participants (120 Moroccan high school students), the methods (pre/post
vocabulary tests, control group, statistical analysis using t-tests), the findings
(students in the reading group showed significantly higher gains), and finally,
how this relates to your own study (perhaps you are studying writing fluency,
so this study supports the importance of literacy exposure, even if the focus is
slightly different).
As you write your literature review, it's not just about what you say, but how
you say it—and most importantly, how you cite your sources. Academic
writing follows strict guidelines to ensure that references are handled
ethically, consistently, and professionally. This is where style manuals
come in. These are official guides that tell you how to format your references,
how to structure your headings, how to handle quotations, and how to present
tables, figures, and in-text citations.
In the social sciences, the most commonly used manual is the APA Style
Manual (currently in its 7th edition). It is the gold standard for research in
education, psychology, and the broader social sciences. It governs everything
from how you write author names in the text (e.g., “Creswell & Plano Clark,
2018”), to how you structure your reference list, to how you write up tables,
footnotes, and even how you use bias-free language.
Why is using a style manual so important? First, it ensures that your reader
can easily locate your sources and verify your references. Second, it avoids
plagiarism—unintentional or otherwise—by ensuring every piece of
borrowed information is clearly documented. Third, it signals your academic
maturity: sloppy citations, inconsistent formatting, or invented styles suggest
inexperience and reduce your credibility.
The authors of the chapter recommend several specific practices: make sure
all in-text citations are properly reflected in your reference list; make sure
your headings follow the required levels (APA has five heading levels, each
with its own formatting); decide early on how many levels your project will
need; and be consistent throughout. Also, you must decide where to place
footnotes (though they are less used today), and pay close attention to how
tables and figures are labeled and formatted.
What’s important is not the shape, but the clarity of structure. A good
literature map shows:
Let’s say your topic is about “digital literacy and gender in Moroccan
universities.” Your top-level categories might be:
● Digital literacy
You might begin with 25 sources for a preliminary map. For a full thesis or
dissertation, you may be working with 100 or more. It takes time and
multiple drafts to get the structure right. You’ll need to ask: Does my map
clearly show where the conversation is, and what is missing? Do the
branches reflect actual patterns in the literature, or just categories I imposed
arbitrarily? Which part of the map does my study most directly link to—and
how?
The final parts of the chapter offer comprehensive advice for researchers
trying to manage which literature to use and how to structure it. The
authors propose a priority system:
The final section also walks you through the process of writing your review.
The key is to group sources thematically, write clear transitions, and avoid
listing studies like a laundry list. You should always write with a purpose: to
show patterns, debates, gaps, and justify the need for your research. This
review sets the stage for your method, your questions, and your theoretical
framework.
At the heart of any scholarly research project lies a core intellectual structure:
theory. Theory isn’t just a background idea or an optional element—it is a
central guiding force that influences everything from how we formulate our
questions to how we interpret our findings. This chapter begins by
establishing that the role of theory varies significantly depending on whether
your study is quantitative, qualitative, or mixed methods, and part of being
a skilled researcher is knowing how to use theory appropriately within your
chosen approach.
In qualitative research, the relationship with theory is far more flexible and
varied. In some cases—especially in grounded theory—the theory is not
imposed at the start but is instead generated as an outcome of the study. The
researcher goes into the field with an open mind, collects data, and lets
patterns, themes, and relationships emerge inductively. But in other
qualitative traditions, such as ethnography or critical theory-based
research, theory may play a more explicit role at the beginning, helping
shape what is observed, what questions are asked, and how the data is
analyzed. In social justice-oriented qualitative studies, theory serves as a
lens, one that guides the researcher to focus on power, marginalization, and
inequality. This lens is not neutral—it’s explicitly political and ethical,
aimed at transformation.
In mixed methods research, both roles are in play: researchers may begin
with a theory they want to test quantitatively, while also generating themes
and explanations qualitatively. Furthermore, mixed methods research might
use a theoretical framework—often drawn from social science or
participatory traditions—that integrates both quantitative and qualitative data
collection and interpretation. These frameworks can be disciplinary (like
economic or psychological theories) or critical (like feminist, racial, or
disability-based frameworks), and they give coherence to studies that are
often very complex in their design.
So, the chapter opens by saying: before you go into methods or data, you
must understand the theory—because theory affects everything. And what
follows is a detailed exploration of how that works in each paradigm.
That’s why true experiments are considered the gold standard in quantitative
research. Only by randomly assigning participants to groups (e.g., one
group drinks red wine, another doesn’t) can we control for these confounding
variables. Experiments give us control, which is necessary to make stronger
claims about causality.
Now that the idea of causality is clear, the chapter moves into discussing the
types of variables—because theory in quantitative research is built around
how variables relate to each other.
The first and most fundamental type is the independent variable. This is the
variable that the researcher believes is causing or influencing something
else. In experiments, it is the variable you manipulate. In our wine example,
daily red wine consumption is the independent variable—you’re changing it
to see what effect it has.
But it doesn’t end there. There are also two other kinds of variables that help
researchers understand complex relationships:
Understanding these types of variables is crucial for designing your study and
choosing the right theory. If your theory says X causes Y through M, then M
is a mediator. If your theory says X causes Y but only under certain
conditions, those conditions are moderators. Identifying these kinds of
variables helps you make more nuanced, accurate, and useful models.
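Moderation can be illustrated numerically: if the slope of Y on X differs across levels of a third variable, that variable moderates the relationship. The tutoring-hours data below are invented, and the slope is computed as cov(x, y) / var(x), the ordinary least-squares estimate.

```python
import statistics

def slope(x, y):
    """OLS slope of y on x: cov(x, y) / var(x)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / statistics.variance(x)

# Hypothetical data: hours of tutoring (X) vs. test score (Y),
# split by a possible moderator: prior proficiency
hours            = [1, 2, 3, 4, 5, 6]
score_low_prior  = [52, 58, 63, 70, 76, 81]  # tutoring helps a lot
score_high_prior = [85, 86, 86, 87, 88, 88]  # little room left to improve

print(round(slope(hours, score_low_prior), 2))   # steep slope
print(round(slope(hours, score_high_prior), 2))  # nearly flat slope
```

The sharply different slopes suggest that prior proficiency moderates the effect of tutoring hours on test scores; a full analysis would test the interaction term in a single regression model.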
Here’s the key: theory is not just an idea—it’s a logical, structured system
of propositions that explain how and why variables are connected. And
over time, as hypotheses are tested and retested, these theoretical systems
become stronger, more refined, and more reliable.
To build a strong quantitative study, you don’t just need a list of variables.
You need to explain why you believe these variables are related, and why
you think your independent variable will influence your dependent one. That
explanation comes from theory. But what exactly is a theory?
The chapter uses Kerlinger’s (1979) classic definition to ground this idea: a
theory is “a set of interrelated constructs (variables), definitions, and
propositions that presents a systematic view of phenomena by specifying
relations among variables, with the purpose of explaining natural
phenomena.” In other words, a theory is like a carefully built framework of
variables and ideas that describe how things work in the world and why they
work that way.
For instance, imagine a theory in education that explains why some students
are more motivated than others. The theory might say that autonomy,
competence, and relatedness are psychological needs, and that satisfying
these needs leads to higher motivation. This is not just a story—it’s a theory
with named variables, relationships among them, and an explanation for an
outcome. That’s what makes it testable.
The theory might appear in your study in many forms—it could be a section
in your literature review, a visual diagram, a set of hypotheses, or a separate
chapter titled Theoretical Framework. But in every case, its purpose is to tie
the variables together and justify why you expect a certain pattern or result.
But theories don’t just appear overnight—they are developed over time
through repeated testing. Researchers generate hypotheses (such as “high
isolation leads to high anxiety”), test them in different settings (colleges,
workplaces, different age groups), and gradually build a body of evidence.
When these results converge, the theory gains credibility. Eventually,
someone names it, publishes it, and others use it in new contexts. That’s how
theory becomes part of the field’s common knowledge.
Theories also vary in scope and level. Some are very narrow, explaining
only small patterns in specific situations—these are called micro-level
theories. For instance, Goffman’s theory of “face work” is a micro theory
that explains behavior in face-to-face interactions. Some theories are
broader—meso-level theories—that apply to communities, institutions, or
organizations. Others, like macro-level theories, apply to entire societies or
systems. For example, Lenski’s theory of social stratification explains how
surplus production affects the organization of society. Knowing the scope of
your theory helps you judge whether it’s appropriate for your research focus.
The first point made here is that researchers don’t always express theory the
same way. Depending on the tradition they come from, the kind of research
they’re doing, and their own style, they might write theory as a network of
hypotheses, as a narrative argument using cause-effect logic, or as a
diagram or visual model. All three are legitimate, and in many studies,
researchers combine them for clarity.
The chapter gives an example from Hopkins (1964), who articulated a theory
of influence processes using 15 separate hypotheses. The structure is
elegant and tight—each hypothesis builds logically on the others, and the
entire collection becomes a system of interrelated propositions.
This form is very efficient and makes the theory easy to test. However, it can
be dense and technical, especially for readers who aren’t experts. That’s
why some researchers supplement it with narrative explanations or
diagrams.
This kind of statement is powerful because it not only tells us that two
variables are related, but how the relationship works. The logic is
symmetrical, reversible, and testable. It also opens the door to mediating
and moderating conditions—for example, we could ask: “Does this hold true
only in certain cultural settings?” or “Is there a point where more interaction
reduces liking?”
What’s important here is that the if-then format lays out the underlying
assumptions of the theory clearly. It also helps you translate theory into
hypotheses, because hypotheses are often just specific if-then statements
with measurable variables attached.
So, the theory gives us the logic, and the hypothesis gives us the testable
prediction.
When you use theory in a quantitative study, you’re not just including it for
decoration—you’re using it deductively. That means the entire logic of your
research flows from the theory. You begin by stating a theory, then you
develop hypotheses or research questions from it, and then you collect and
analyze data to test those predictions. So where should that theory be
placed?
The general rule is: place the theory early in your research plan. That means
it should appear before you present your hypotheses or research questions,
because the theory is what justifies and explains why you are asking those
questions. Some researchers place it in the introduction, others in the
literature review, and some in a separate section titled "Theoretical
Framework" or "Theoretical Perspective." The chapter recommends this last
option—using a clearly labeled, stand-alone section—because it gives the
reader a chance to clearly distinguish the theory from the background
literature, variables, or hypotheses. In formal academic proposals, this makes
your theoretical thinking transparent and focused.
Now, think of this in terms of structure. You might write your introduction by
first describing the problem, followed by a brief review of relevant literature,
and then immediately after that, introduce your theoretical perspective. This
becomes your organizing framework—it tells your reader what to expect,
what variables are involved, and why you believe these variables are
connected. The theory becomes the backbone of your argument, helping
you transition smoothly into hypotheses, variable definitions, and
measurement tools.
To visualize this process, the chapter presents Figure 3.4, which outlines the
deductive approach. It starts with a theory, breaks that theory down into
hypotheses or research questions, defines the variables, then finds or builds
instruments to measure those variables, and finally collects and analyzes data
to confirm or refute the theory. This flow is linear, logical, and testable—the
defining characteristic of quantitative research.
In summary: In quantitative studies, you use theory to build a case. And you
must state it clearly, early, and explicitly—either in the introduction, right
after the research questions, or in a standalone theoretical section. That theory
sets up everything that follows.
Now that we know where the theory goes, how do you actually write it?
1. Identify the Theory: Start by naming the theory you will use. For
example: “The theory that I will use is social learning theory.”
2. Give the Origin: Explain where the theory comes from, who
developed it, and in what context. This shows you’ve done your
background reading. Example: “It was developed by Albert Bandura
and used to study human learning in educational and clinical contexts.”
3. State the Theory’s Propositions: Summarize the central claims the
theory makes—its propositions or hypotheses—so the reader knows what
logic you are importing. Example: “This theory indicates that people
learn by observing and imitating models.”
4. Connect the Theory to Your Variables: This is where you apply the
theory to your own study. You must show how the theory justifies the
relationship between your independent and dependent variables. This is
the "rainbow bridge"—how X explains or predicts Y.
“The theory that I will use is [name the theory]. It was developed
by [origin/source], and it was used to study [topic]. This theory
indicates that [propositions/hypotheses]. As applied to my study,
this theory holds that I would expect my independent variable(s) [X
variables] to influence or explain the dependent variable(s) [Y
variables] because [rationale based on the logic of the theory].”
This script can be adapted to any theory and research question. It forces you
to be clear, disciplined, and precise. It also ensures that your study is
anchored in the larger academic conversation.
The chapter then provides an example from a real dissertation written by
Crutchfield (1986). Her study looked at whether locus of control and
interpersonal trust affected scholarly productivity among nursing faculty.
She used social learning theory as her theoretical framework. In her writing,
she:
● Named the theory and gave its background (Bandura, Rotter, etc.)
The first way theory enters qualitative research is similar to its use in
quantitative work: as a broad explanation that helps the researcher
understand behavior or structure the study. For example, ethnographers,
who study cultures and communities in their natural settings, often use
cultural theories to frame their inquiries. These theories might focus on
social organization, family structures, language practices, gender roles,
rituals, or systems of power.
The chapter gives the example of health science research, where researchers
often begin with conceptual models related to health behavior—like the
theory of planned behavior, health belief model, or quality of life
frameworks. These are not hypotheses to be tested as in quantitative work,
but orienting concepts that guide what the researcher observes, asks, and
analyzes.
So in this first model, theory provides a ready-made set of ideas or
patterns—a sort of intellectual map—that researchers use to explore
complex cultural or social phenomena. While these might not always be
labeled “theories” in the formal sense, they serve that role.
Here we come to one of the most powerful and politically engaged uses of
theory: the theoretical lens or transformative perspective. This approach
gained strength in the 1980s and 1990s when researchers from feminist,
critical race, queer, disability, and decolonial traditions began using
qualitative research to challenge power structures and give voice to
marginalized groups.
In this model, theory doesn’t just explain the world—it questions it,
interrogates it, and aims to transform it. Researchers who use these lenses
are not “neutral observers.” They position themselves as active, engaged
scholars who are asking political and ethical questions: Who has power?
Who gets to speak? Who has been silenced? What systems sustain
inequality?
When researchers adopt one of these lenses, it affects every part of their
study: the questions they ask, how they collect data (often collaboratively),
how they analyze it (often through narratives, stories, or themes), and how
they write their results (often alongside participants, rather than about
them). The goal is not just understanding, but empowerment and social
change.
Here, the researcher begins with no fixed theory, enters the field with
open-ended questions, collects data, and gradually builds themes, categories,
and patterns. From these, a theory is developed. This is the inductive logic
of qualitative research: from data to patterns to generalizations.
Finally, there are qualitative studies that do not use any explicit theory at all.
This doesn’t mean they are “atheoretical”—every researcher comes with
assumptions—but in these studies, the goal is often to provide a rich,
textured, descriptive account of a phenomenon without framing it in
theoretical terms.
These are not “types” of theory in the abstract; they are ways of embedding
theory throughout the design, data collection, analysis, and interpretation
stages of a study. Let’s unpack each one in full.
For example, Creswell and Plano Clark (2011) describe how theory in this
model should:
One example discussed in the chapter is Kennett et al. (2008), who studied
chronic pain management using Rosenbaum’s model of learned
resourcefulness. This is a cognitive-behavioral theory that explains how
individuals cope with stress and health challenges. The researchers used a
quantitative instrument (Rosenbaum’s Self-Control Schedule) to measure
resourcefulness and also conducted qualitative interviews to explore how
patients experience pain management. The theory appeared at the beginning
of the article and was used to generate questions, analyze patterns, and
explain findings across both methods. The authors also proposed a visual
model at the end of the study showing which aspects of the theory were
supported.
Mertens (2003, 2009, 2010) lays out specific criteria for how to embed this
framework into mixed methods research. These include ten reflective
questions, such as:
● In the problem statement: Does this problem emerge from the
community itself?
● Will the findings be useful to the community, not just academia?
The chapter includes Box 3.1, which operationalizes these criteria in relation
to every phase of the research process—from defining the problem to
collecting data to reporting results.
In their review of studies using this approach, Sweetman et al. (2010) found
that while many researchers adopted elements of the transformative
paradigm, few made it explicit. This suggests that the field is still developing
its standards, and that researchers interested in using this framework need to
be especially deliberate and reflective.
To see this approach in action, the chapter presents Hodgkin’s (2008) study,
which used a feminist lens in a mixed methods study of social capital in
Australia. Hodgkin highlighted the absence of gendered perspectives in
traditional studies of social capital, and designed her study to give voice to
women’s experiences. Her research questions were guided by feminist
theory; her quantitative and qualitative findings were integrated through that
lens; and her interpretations focused on women’s social participation,
isolation, and identity. By doing so, Hodgkin demonstrated that theory isn’t
just a conceptual tool—it is a political and ethical stance.
The chapter ends by tying together everything we’ve learned. Whether you
are conducting quantitative, qualitative, or mixed methods research, theory
plays an indispensable role. It can:
● Explain or predict relationships between variables;
● Offer a lens for seeing the world through issues of power and identity;
● And help researchers take responsibility for the social impact of their
work.
But before even getting to those elements, the authors make one thing clear:
the introduction must clearly identify the research problem early on. This
is not an abstract idea or a topic—it is a specific issue, concern, controversy,
or gap in understanding that the study aims to address.
The authors advise that the first paragraph of a good introduction should
engage the reader by showing that the topic is important and timely, and
that it touches a real issue in education, psychology, health, or whatever your
field is. This part is often referred to as the “hook”, not in the sense of catchy
writing, but as a statement of urgency or need. Why should anyone care
about this study? What’s at stake if the problem goes unstudied?
In the next key part of the introduction, the researcher defines the problem
that the study seeks to investigate. This is not simply an area of interest or a
topic the researcher likes—it is a deficiency in what is currently known in the
field.
Creswell and Creswell refer here to the work of social scientists like Babbie
(1990) and others who emphasized that the research problem must meet
several criteria:
To build this, the researcher must synthesize what the literature already says.
What do we know about this issue? What have past studies found? Where do
they agree or conflict? But more importantly: what is still missing? This is
the core of the problem statement.
Here, we’re not just naming a topic—we are identifying a specific gap. And
that gap becomes the reason the study exists.
Each of these pieces builds toward the purpose. You don’t start with your
research questions. You start by justifying the need for your questions. This
is what the introduction is for.
Creswell and Creswell explain that this section should name audiences
explicitly: educators, policymakers, scholars, practitioners, students,
institutions, etc. It should also explain how the findings might be used: to
improve practices, inform policy, advance theory, or open up new lines of
research.
For example:
“The findings of this study could provide teachers and curriculum
designers with culturally sensitive strategies to boost EFL
motivation in rural Moroccan classrooms, thereby addressing
ongoing disparities in language learning outcomes.”
This tells the reader: this isn’t just academic busywork. This study has
real-world impact.
The final part of the introduction is where the researcher introduces the
purpose statement—which becomes the subject of Chapter 6 (and which
we already studied in detail). But here, in Chapter 5, Creswell and Creswell
emphasize that the transition to the purpose statement must be natural and
coherent. It must emerge logically from the previous discussion of the
problem, literature, deficiencies, and significance.
In other words: you don’t just dump a purpose statement on the reader out of
nowhere. You build up to it, layer by layer, so that when the reader arrives at
the sentence beginning with “The purpose of this study is…,” they’re already
primed to understand what’s being proposed and why it matters.
This statement tells the reader exactly what kind of study is being done,
and it reflects the philosophical commitments of qualitative research:
subjectivity, context, complexity, and meaning.
This purpose statement sets the stage for hypotheses, variables, instruments,
and statistical analysis. It reflects predictive logic and the values of
measurement, control, and generalization that define quantitative research.
● The reason for mixing (e.g., to triangulate findings, to build from one
phase to another),
The authors begin this section by emphasizing that the language you use in
the purpose statement must match the logic and assumptions of your
chosen research approach. This might seem obvious, but it’s where many
student researchers go wrong. If you’re conducting qualitative research but
use words like “impact” or “effect,” your language is betraying a positivist
assumption. If you’re doing quantitative work but avoid stating your
variables or predicted relationships, you may seem vague or unscientific.
● The intent of the study (using verbs like explore, understand, describe),
● The participants,
● The setting,
● And optionally, the theory being tested.
Example:
This is clear, concise, and reflects a strong alignment with quantitative logic.
Purpose Statement Template: Mixed Methods Research
The mixed methods purpose statement is the most complex because it must
integrate the logic, goals, and procedures of both approaches. The purpose
statement must:
● State the intent of the study (why mixing methods adds value),
Example:
“The purpose of this explanatory sequential mixed methods study is
to examine the relationship between online feedback and writing
performance among Moroccan university students through a
survey, followed by interviews with students to explore their
perceptions of teacher feedback in more depth. The rationale for
using mixed methods is to explain the survey results with
qualitative insights.”
That is exactly what a mixed methods purpose statement should do: reflect
methodological integration and a clear rationale.
● And logically tied to the paradigm and design you are using.
In this section of Chapter 7, Creswell shifts from the open, exploratory nature
of qualitative research to the precise, predictive, and testable logic of
quantitative inquiry. Unlike qualitative research questions—which focus on
meanings, experiences, or processes—quantitative research questions are
formulated to measure variables, test relationships, or compare groups.
They must be stated in a way that reflects the deductive, postpositivist
worldview that underpins most quantitative research designs.
The authors begin by explaining that in quantitative research, you can express
your study’s aims in three different but related forms: (1) research
questions, (2) hypotheses, and/or (3) objectives. While these serve
overlapping functions, the focus in this chapter is primarily on questions and
hypotheses, which are the most common format in formal research studies
and dissertations.
Creswell and Creswell also stress that these questions should avoid vague
terms like “effectiveness” or “impact” unless those terms are clearly defined
and measurable. For example, instead of saying, “What is the impact of
online teaching?” you should ask, “What is the difference in test scores
between students who receive online instruction and those who receive
in-person instruction in grammar courses?”
Creswell and Creswell recommend always stating both the null and the
alternative hypotheses, particularly in studies that involve inferential
statistical tests, such as t-tests, ANOVA, or regression analysis. They also
suggest that these statements should mirror the structure and directionality
of your study’s theoretical claims. That means if your theory suggests that
more app usage will increase scores, then your hypothesis should be
directional—that is, specifying positive or negative relationships.
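As a hypothetical illustration of a directional hypothesis (all numbers invented), the app-usage claim could be tested with an independent-samples t statistic, here computed with Welch's formula in plain Python:

```python
import math
import random

# Hypothetical data (all numbers invented) for the directional hypothesis
# H1: students who use the app score HIGHER than controls (H0: they do not).
random.seed(1)
app = [random.gauss(75, 8) for _ in range(100)]      # simulated app-usage group
control = [random.gauss(68, 8) for _ in range(100)]  # simulated control group

def mean(v):
    return sum(v) / len(v)

def variance(v):
    mu = mean(v)
    return sum((x - mu) ** 2 for x in v) / (len(v) - 1)

# Welch's t statistic for two independent samples
t = (mean(app) - mean(control)) / math.sqrt(
    variance(app) / len(app) + variance(control) / len(control))

# With ~200 observations the t distribution is close to normal;
# 1.645 is the one-sided critical value at alpha = .05.
print(f"t = {t:.2f}; reject H0 at alpha = .05 (one-sided): {t > 1.645}")
```

A nondirectional (two-sided) test would instead compare |t| against 1.96, mirroring the paired null/alternative statements Creswell and Creswell recommend making explicit.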
Moreover, the authors stress that your hypotheses must align precisely with
the variables you defined earlier in your purpose statement and theoretical
framework. There should be no surprise variables introduced in your
questions or hypotheses that weren’t already explained. This ensures logical
consistency across your research design.
What’s critical in this part of Chapter 7 is that Creswell and Creswell are not
just giving language formulas—they are giving you the logic of research
design. They are showing that hypotheses are not guesses; they are logical,
testable claims derived from theory, aligned with your purpose, and made
explicit through variables you can measure.
In the final part of Chapter 7, Creswell and Creswell turn their attention to
what is perhaps the most demanding task in question formulation: writing
mixed methods research questions. What makes this task especially
complex is that mixed methods research is not just a combination of
qualitative and quantitative methods—it is a philosophically and
methodologically integrated approach. Therefore, the questions must not
only reflect both traditions—they must also be carefully sequenced, aligned,
and justified within the study’s overall design.
The authors begin by acknowledging that while most researchers are
accustomed to writing separate qualitative and quantitative questions, few
are trained in writing an explicit mixed methods research question. This is
a relatively new but essential skill because it signals that the study is truly
mixed—not just in methods, but in logic, structure, and purpose. A mixed
methods research question helps show how the two strands will be
integrated, what order they will appear in, and why the researcher has
chosen to combine them at all.
Creswell and Creswell present three kinds of questions that should appear in
a well-developed mixed methods study: a qualitative question, a quantitative
question (or hypothesis), and an explicit mixed methods question.
This third question is what sets a true mixed methods study apart from a
study that just uses two sets of methods side-by-side. The mixed methods
question reflects the core intent of integration—it captures the study’s
rationale in question form. It tells the reader: How will the quantitative and
qualitative data relate to each other? What will the combination accomplish
that each method alone could not?
Let’s take a closer look at how Creswell and Creswell suggest these mixed
methods questions be written.
First, they emphasize that your design type should guide your question
structure. For example, if you are using an explanatory sequential design
(quantitative followed by qualitative), your mixed methods question might
be:
The chapter also notes that when the mixed methods question is omitted, it
often signals that the study lacks methodological integration—that the
mixing is superficial or procedural, rather than conceptual. Therefore,
including a mixed methods research question is not only good practice—it is
a sign of design coherence and scholarly maturity.
Creswell and Creswell encourage the use of visual figures and explicit
explanation (such as rationale, priority, timing) alongside the mixed methods
question to further clarify the design. This aligns with their broader guidance
from earlier chapters (especially Chapter 10 on mixed methods procedures)
where the complexity of these studies is best communicated through both
narrative and visual tools.
Finally, the authors remind readers that writing strong mixed methods
questions requires deep familiarity with both paradigms. The questions must
not conflict with the worldview or assumptions of either method. For
example, you should not ask a qualitative question that implies variables or
measurement, nor a quantitative question that includes subjective language or
undefined terms. The mixed methods question must bridge both, without
undermining either.
1. Methods-Based Variations
2. Theory-Use-Based Variations
3. Hybrid Variations
1. METHODS-BASED VARIATIONS
Why it matters:
This is the clearest, most common, most “teachable” approach, and it makes
your study coherent. It ensures that your purpose statement, your research
questions, and your design all match each other. For example:
2. THEORY-USE-BASED VARIATIONS
This second variation shifts the organizing force from method to theory.
Here, the way you write your questions or hypotheses is shaped more by the
theoretical framework you are working within than by whether your study
is qual, quant, or mixed.
What Creswell and Creswell are saying here is that sometimes, theory
dominates the research logic. So, your questions might directly reflect the
assumptions or propositions of the theory—even if that makes your question
structure deviate from what’s typically expected for that method.
Why it matters:
Example:
Here, even if your method is qualitative and case-based, your questions are
shaped by theory. The voice of the theory becomes dominant, and the
structure of your inquiry reflects its priorities.
3. HYBRID VARIATIONS
This third variation is where things get more flexible—and possibly more
creative. Creswell and Creswell refer to “hybrid variations” as combinations
of styles. These might appear in studies where researchers write both
research questions and hypotheses (even in a single design), or where they
mix formats depending on their needs.
Why it matters:
Sometimes you need to do this. Maybe your study explores how teachers feel
about a new pedagogical method (qual), but also measures how student
grades change (quant). You might have a central qualitative question and
also include a quantitative hypothesis. This doesn’t make your study
incoherent—it makes it hybrid. As long as the rationale is clear and justified,
Creswell and Creswell support this kind of approach.
Example:
In short:
The chapter clarifies this difference through a simple yet effective contrast: if
you are testing whether children who play violent video games are more
likely to engage in playground aggression, you're asking a correlational
question. A survey method could help you explore this relationship by
measuring both variables and seeing if there's a pattern. But if you're testing
whether playing violent video games causes aggression, then you need a true
experimental design, where you can randomly assign participants and
control variables to isolate causal effects. This distinction between
association and causation is fundamental in quantitative research
design—and it dictates whether a survey or experiment is the right tool for
the job.
Further, the design should specify whether the survey is cross-sectional (data
collected at one point) or longitudinal (data collected over time). The
researcher must also explain the mode of delivery—for example, will the
survey be mailed, emailed, administered by phone, or conducted
face-to-face? Each has trade-offs. Online surveys may save time but exclude
participants without internet access. In any case, the researcher is expected to
defend their choice with evidence from existing literature on survey
methodology (e.g., Fowler 2014, Fink 2016).
Then comes sampling design, which includes whether the sample is selected
in a single stage or multistage (clustering). A single-stage design means that
the researcher directly samples individuals from a known list. A multistage
design, on the other hand, might first sample organizations (like schools) and
then sample individuals within those clusters. This is especially useful in
large or hard-to-reach populations.
The author also addresses stratified sampling, which ensures that certain
subgroups (e.g., gender, ethnicity) are proportionally represented. This is
important when the population has known subgroups that could affect the
outcome. Stratification ensures fairness and accuracy in drawing conclusions.
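As a small illustration (the population figures are hypothetical), proportional stratified sampling can be sketched in a few lines: each subgroup is sampled in proportion to its share of the population, so the sample's composition mirrors the population's:

```python
import random

# Hypothetical sampling frame: 600 women and 400 men (any known
# subgroup—gender, ethnicity, school type—works the same way).
population = [("F", i) for i in range(600)] + [("M", i) for i in range(400)]

def stratified_sample(pop, key, n):
    """Draw a proportionally allocated stratified sample of size ~n."""
    strata = {}
    for unit in pop:
        strata.setdefault(key(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = round(n * len(units) / len(pop))  # proportional allocation
        sample.extend(random.sample(units, k))
    return sample

random.seed(7)
s = stratified_sample(population, key=lambda u: u[0], n=100)
counts = {g: sum(1 for u in s if u[0] == g) for g in ("F", "M")}
print(counts)  # the sample mirrors the population split: 60 F, 40 M
```

A multistage version would first apply `random.sample` to a list of clusters (e.g., schools) and only then sample individuals within the selected clusters.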
Finally, they stress the need for power analysis to determine appropriate
sample size. This is not just a mathematical detail—it’s an ethical and
methodological necessity. Too small a sample and your results might not be
statistically reliable; too large and you're wasting resources. The power
analysis requires input like expected effect size, desired alpha level (Type I
error), and beta level (Type II error), and can be performed using tools like
G*Power. Creswell and Creswell walk the reader through an example on
burnout among nurses, showing how these numbers are derived and what they mean.
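The arithmetic behind such a power analysis can be sketched with the textbook normal-approximation formula for comparing two group means (my illustration; G*Power uses the exact noncentral t distribution, so its answers run a participant or two higher):

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation n per group for a two-sided, two-group
    comparison of means: n = 2 * ((z_{1-a/2} + z_{power}) / d)**2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # guards the Type I error rate
    z_beta = NormalDist().inv_cdf(power)           # guards against Type II error
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Cohen's conventional effect sizes: small = .2, medium = .5, large = .8
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: {sample_size_per_group(d)} participants per group")
```

For a medium effect (d = 0.5) at α = .05 and power = .80 this yields 63 per group, illustrating the trade-off in the text: the smaller the expected effect, the larger the sample the study ethically requires.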
Once the population and sampling decisions have been made, the researcher
must turn to the critical issue of instrumentation—that is, what tools or
devices will be used to actually measure the variables in the study. In this
section, Creswell and Creswell emphasize that the choice of instruments is never casual or
intuitive—it must be explicit, justifiable, and aligned with the research
questions and the theory guiding the study. A variable only becomes real in a
quantitative study when it is operationalized through a valid instrument.
Following their in-depth treatment of surveys, the chapter moves into the
domain of experimental research, which Creswell and Creswell describe as
the most rigorous form of quantitative inquiry for testing causal
relationships. An experiment is defined as a design in which an independent
variable is manipulated to determine its effect on a dependent variable,
under conditions of high control. The key here is the idea of manipulation
and random assignment—which separates true experiments from other
forms of research like surveys or observational studies.
R O X O
R O O
Here R denotes random assignment to a group, O an observation (a pretest
or posttest measurement), and X exposure to the treatment.
This means that both groups are measured before and after, but only one
receives the treatment.
The authors emphasize that internal validity—the degree to which the results
can be attributed to the treatment rather than other factors—is the holy grail
of experimental research. But internal validity is constantly under threat from
potential confounds: other variables that could be influencing the outcome.
For instance, what if students in the control group had a substitute teacher
while the treatment group had their regular teacher? That’s a confound.
Experimental designs must anticipate and minimize such threats.
They also highlight external validity, which refers to how well the results
generalize beyond the study’s setting and participants. Sometimes, increasing
internal validity (through tighter control) comes at the cost of generalizability.
Researchers must balance these competing priorities.
Toward the end of the chapter, Creswell and Creswell summarize how all of
this translates into a well-written method section for a quantitative research
proposal. This section should be written in past tense if the study has been
completed, or future tense if it's still being proposed. The structure of this
section should be clean and predictable. They recommend using clear
headings for each subsection: research design, population and sample,
instrumentation, data collection procedures, and data analysis plan.
In writing about data analysis, for example, it’s not enough to say “data were
analyzed statistically.” You must name the statistical tests, explain what they
were used to test, and indicate what software (e.g., SPSS, R) was used. You
should also explain how you dealt with missing data, outliers, or violations of
statistical assumptions.
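As a minimal sketch of what such specificity looks like (the data, group names, and choice of test are invented for illustration, not taken from Creswell and Creswell), a data analysis plan might name an independent-samples comparison with listwise deletion of missing scores, which could be expressed in Python like this:

```python
# Hypothetical illustration of a named analysis: "an independent-samples
# (Welch's) t-test, computed after listwise deletion of missing scores."
# All scores below are invented example data; None marks a missing value.
import math
import statistics

treatment = [78, 85, None, 90, 88, 84, 91]
control   = [75, 80, 79, None, 77, 82, 76]

def clean(scores):
    """Listwise deletion: drop missing observations before analysis."""
    return [s for s in scores if s is not None]

def welch_t(a, b):
    """Welch's independent-samples t statistic (unequal variances)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_a - mean_b) / standard_error

t = welch_t(clean(treatment), clean(control))
print(f"Welch t = {t:.2f}")
```

In a real proposal, the same level of detail would extend to the software used (SPSS, R, and so on), the planned handling of outliers, and checks of statistical assumptions.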
From the outset, the authors insist that the qualitative method section must
clarify several essential components. These include: the intent of the
qualitative design; the specific strategy of inquiry (like case study,
ethnography, or phenomenology); the researcher’s positionality and
reflexivity (their personal role in the study); the types of data sources being
used; the procedures for recording data; the methods of analysis; and
finally, the approaches used to ensure the accuracy and trustworthiness
of the findings. These steps are not arbitrary; they reflect decades of
qualitative traditions, and each has philosophical significance. A proposal or
final report must make each one explicit, so that the reader can follow not
only what was done, but why it was done.
The chapter moves into a foundational section that outlines the defining
characteristics of qualitative research. These are not optional features—they
are the core ingredients that distinguish qualitative work. The authors list
several key traits:
First, qualitative research takes place in the natural setting. This is perhaps
the most sacred tenet of qualitative inquiry. Rather than isolating variables in
a lab or sending out a cold instrument, qualitative researchers go to the field.
They gather data where life actually unfolds. This commitment to real-life
contexts allows the researcher to see behavior, language, and interaction as
they happen, not in artificial conditions.
Sixth, the design is emergent. You don’t fully plan a qualitative study before
you start. The questions may shift. The participants may change. The
researcher must adapt, letting the research evolve as they engage more deeply
with the field.
The next section of the chapter turns to designs—the specific strategy or type
of qualitative approach the researcher uses. Creswell and Creswell build on
the earlier chapters to explain that while dozens of designs exist (as shown in
Tesch’s list of 28 or Wolcott’s tree of 22), there are five central approaches
they recommend for social and health sciences: narrative research,
phenomenology, ethnography, grounded theory, and case study. Each of
these designs has its own roots in the social sciences and its own logic of
data collection, analysis, and presentation.
The authors advise that a good methods section should clearly name the
design, explain its origin and definition, justify why it fits the research
purpose, and show how it will shape all other aspects of the study—from the
title, to the data collection, to the way the results are written.
This is why every qualitative methods section must include a statement about
the researcher’s role, sometimes called a positionality statement. This is
not a confessional or personal anecdote—it is an academic disclosure of how
the researcher is situated in relation to the topic, the participants, and
the setting, and how that positioning might influence data collection,
interpretation, and even access to the field.
They also recommend stating what access you have to the site or participants,
how your relationships might shape the dynamics of interviews or
observations, and how you plan to manage your role ethically—especially if
you're studying a setting you're personally connected to.
There are several types of purposeful sampling that Creswell and Creswell
mention or imply:
Next, Creswell and Creswell turn to the types of data collected. The core
sources include:
Each data type has its own logic and strengths, and a good methods section
must not only say what data will be collected, but why—that is, how each
source contributes to answering the research question.
The authors emphasize that qualitative data are usually collected using
protocols or field notes, and these tools must be described in the method
section. For interviews, this might mean describing the interview
protocol—a set of guiding questions (not a strict questionnaire). For
observations, this might mean describing what will be observed, how notes
will be taken, and what observer role will be adopted. Creswell and
Creswell point out that these tools must be piloted and refined, just like
instruments in quantitative research.
This next section of the chapter is a dense and layered roadmap to the process
of analyzing qualitative data. Creswell and Creswell describe qualitative
data analysis as spiral-like, not linear. You do not move from point A to
point B in a straight line. Instead, you circle back repeatedly, refining codes,
testing themes, and re-reading texts. The process is iterative, reflexive, and
emergent—new ideas surface as you work, and the researcher must be
willing to adapt and deepen the analysis continuously.
2. Reading through all the data: Before coding anything, the researcher
must immerse themselves in the material. This step is about getting a
sense of the whole, reading slowly, reflectively, and letting initial
impressions or “sensings” emerge.
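One narrow, mechanical slice of this spiral can be sketched in code: tallying how often each code appears across transcript segments and grouping codes under provisional themes. This is a hypothetical illustration only (the codes, segments, and theme names are invented), and the interpretive work of coding itself cannot be automated:

```python
# Hypothetical sketch of one bookkeeping step in qualitative analysis:
# counting code frequencies across coded transcript segments and
# summarizing them under candidate themes (revised on each pass).
from collections import Counter

# Invented example: codes already assigned to four interview segments.
coded_segments = [
    ["isolation", "tech-access"],
    ["tech-access", "family-support"],
    ["isolation", "motivation"],
    ["tech-access"],
]

# Tally how often each code appears across all segments.
code_counts = Counter(code for segment in coded_segments for code in segment)

# A provisional theme map; in practice this is reworked iteratively.
themes = {
    "barriers": ["isolation", "tech-access"],
    "resources": ["family-support", "motivation"],
}

for theme, codes in themes.items():
    total = sum(code_counts[c] for c in codes)
    print(f"{theme}: {total} coded segments")
```

A tally like this can prompt the researcher to revisit thin codes or collapse overlapping ones, but the spiral's core remains re-reading and reinterpretation, not counting.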
The authors use the term validation broadly, borrowing from Lincoln and
Guba’s foundational concept of trustworthiness, which includes several
overlapping criteria: credibility, transferability, dependability, and
confirmability. These are the qualitative counterparts to the quantitative
ideas of internal validity, external validity, reliability, and objectivity.
Creswell and Creswell present eight key strategies for ensuring validation in
qualitative research. Though they do not expect every study to use all eight,
they emphasize that a strong qualitative proposal should use at least two or
more, and clearly explain them in the methods section.
The first and most widely respected strategy is triangulation. This involves
using multiple data sources, researchers, theories, or methods to confirm
findings. If several different kinds of evidence point to the same conclusion,
it strengthens the claim.
The second is member checking. This means going back to the participants
with the findings—whether in summary, full transcripts, or themes—and
asking: Does this reflect your experience? This is a powerful way to ensure
that the interpretation honors participants’ own voices and avoids
misrepresentation.
The third is the use of rich, thick description. This refers to writing that
paints a full, vivid picture of the setting, the people, the actions, and the
emotions involved. It allows the reader to enter the world of the participants
and judge for themselves whether the findings feel trustworthy or relatable.
And finally, the eighth strategy is the use of an external audit. This is a
formal process where an independent scholar reviews your entire
project—from data collection to analysis to interpretation—and evaluates
whether the process was rigorous, ethical, and consistent.
Creswell and Creswell also discuss the use of figures, visuals, and matrices
in presenting qualitative findings. While qualitative work is narrative, it still
benefits from visual aids—theme maps, coding trees, process diagrams—that
help organize complex ideas. These tools also make your analysis more
transparent to the reader.
Finally, the authors encourage researchers to write in a way that reflects their
design tradition. A narrative study might present findings as a story. A
grounded theory study might use a process model. A case study might
organize results by case and subcase. A phenomenological study might
begin with a thick description of the experience and then move toward the
distilled essence. Each design has its own rhetorical style, and the writing
must align with it.
The rigor of qualitative research lies not in control and measurement, but in
reflexivity, depth, and clarity—in making your process visible, your
reasoning logical, and your interpretations both faithful and critical. Creswell
and Creswell give you not only the roadmap for this, but the language and
structure you need to write and defend your own work.
Why We Mix Methods and What This Chapter Does
The chapter begins by introducing the core idea of “mixing” as not merely
the coexistence of quantitative and qualitative data, but as the integration of
both to gain deeper insights into a research problem. Creswell and Creswell
emphasize that both forms of data offer distinct kinds of knowledge:
qualitative data provides detailed, open-ended insights into human
experiences, while quantitative data offers standardized, measurable variables
suitable for generalization. When used together, they allow the researcher to
overcome the limitations of each approach by leveraging their
complementary strengths. The concept of mixing isn't random or stylistic—it
is methodological. It is part of a research tradition that has its own theories,
assumptions, procedures, and justifications. Thus, writing a mixed methods
procedure section means doing more than saying, “I’ll use both types of
data.” It means designing the study in a way that intentionally, logically, and
rigorously integrates the two into a unified structure.
This is why the authors stress that a mixed methods procedure section should
begin by explaining what mixed methods research is, why it is being used,
what design is chosen, and how every element of the study—from data
collection to analysis to interpretation—will be shaped by this methodology.
This is not a loose mixing of techniques but a structured and well-theorized
research strategy.
The next section explains that mixed methods research is not simply a tool or
a technique—it is a methodology, with a rich history, theoretical grounding,
and set of procedures. The authors trace its development from the late 1980s
and early 1990s to its current status as a recognized methodology supported
by major journals, textbooks, and academic communities. They cite
important milestones such as the Handbook of Mixed Methods (Tashakkori &
Teddlie), and they highlight its growing presence in disciplines like
education, health sciences, public policy, and social work.
4. The design structure that dictates how and when this mixing occurs;
5. And the worldview or theory that supports the need for integration.
The authors also suggest explaining the terminology used. While many
phrases exist (multimethod, integrated, mixed methodology), the academic
field has mostly converged around the term mixed methods.
Creswell and Creswell now turn to the reasons for choosing mixed methods.
At the general level, researchers may choose it to benefit from the
strengths of both approaches—like adding statistical weight to human
stories, or adding human meaning to numerical findings. At a more practical
level, mixed methods is suited to complex research questions that require
multiple forms of evidence. For example, you might use statistics to show a
pattern, but interviews to explain why that pattern exists.
They then list several specific scenarios where mixed methods is especially
appropriate:
In each of these cases, mixed methods is not used just for show—it is used to
answer research questions more fully than either method could do alone.
The convergent design is efficient and works well when the researcher has
equal expertise in both methods. However, it does demand the capacity to
analyze two types of data simultaneously and the ability to resolve
differences if findings conflict. It also requires sufficient sample size and
data quality on both sides to be credible.
Creswell and Creswell note that this design is especially powerful when
quantitative results raise questions that need human insight to answer. For
instance, a survey might reveal that male students report significantly lower
motivation than female students in English language classes. But the numbers
alone don’t explain why. A second qualitative phase might include interviews
with male students to explore what disengages them, what they value, or
how they perceive classroom dynamics.
Creswell and Creswell point out that this design is particularly useful when
little is known about a population or process. For example, if researchers
want to understand the barriers faced by rural Moroccan students when
accessing online education, they may first conduct interviews to explore
themes such as internet access, language challenges, or family support.
Those themes can then be operationalized into variables, which can be used
to construct a survey or assessment tool for broader quantitative testing.
The strength of this design lies in its developmental power: it allows
researchers to build culturally appropriate, context-sensitive, and
empirically grounded quantitative instruments. But the design also has
limitations—it can be time-consuming, especially when building and
validating new instruments. It also requires strong qualitative skills upfront
and the capacity to translate open-ended insights into structured,
measurable variables.
The key to this design, Creswell and Creswell emphasize, is that the two
strands are not equal in priority. The embedded method exists in service to
the larger design, and the timing of data collection is often simultaneous
or sequential, depending on the research question. Integration often occurs
during interpretation, where the embedded findings are used to clarify or
qualify the primary findings.
In this design, qualitative and quantitative methods are both mobilized in the
service of challenging inequality, amplifying excluded perspectives, and
producing actionable knowledge. The transformative design might take the
form of any of the core mixed methods models (convergent, sequential, etc.),
but it is distinguished by its intent: the researcher is committed not just to
understanding the world but to changing it.
For instance, a researcher might conduct a convergent mixed methods study
on language policy in schools, collecting survey data from administrators and
interviews with students from minoritized backgrounds. The goal would be
not just to explore patterns and experiences, but to make those findings
visible to policymakers, interrupt dominant narratives, and advocate for
change in language education.
Creswell and Creswell stress that researchers using this design must be
explicit about their theoretical lens, their ethical commitments, and their
plans for dissemination and action. This design often requires community
partnerships, collaborative research strategies, and reflexive
accountability throughout the research process. It is the most politically
conscious form of mixed methods research.
For example, a researcher might run a controlled trial comparing the effects
of a mindfulness program on student anxiety levels. The primary data comes
from standardized scales, analyzed statistically. However, the researcher also
interviews participants to learn how they perceived the program, what
challenges they faced, and how they applied what they learned in their daily
lives. The qualitative data provides depth and narrative richness, revealing
why the program worked better for some students than others.
Creswell and Creswell highlight that integration often occurs after the
intervention, during interpretation, though it may also influence design
adjustments during the study. This design strengthens both internal validity
(through control and randomization) and ecological validity (through
contextual qualitative insights). It is especially powerful for evaluation
research, where understanding why an intervention works is as important as
whether it works.
Finally, the case study mixed methods design uses mixed methods strategies
to explore a bounded system—a single case or multiple cases—by
integrating both qualitative and quantitative data to understand its
complexity from multiple angles. This design allows for rich description of
the case’s internal dynamics while also supporting comparative or
generalizable conclusions.
A mixed methods case study might, for instance, examine how a single high
school implements inclusive education policies. The researcher could collect
qualitative data through interviews, document analysis, and classroom
observations to understand how inclusion is practiced, while also collecting
quantitative data such as performance metrics or attendance rates to
measure student outcomes.
The emphasis here is on deep immersion in a specific case, but with the
added value of data integration across paradigms. The goal is not just to tell
a story but to support that story with measurable evidence, or to use the
story to explain the meaning of the numbers.
Creswell and Creswell underline that in this design, the researcher must
clearly define:
● The case itself,
In the final pages of Chapter 10, Creswell and Creswell bring all these
models together by offering guidance on how to actually write a mixed
methods procedures section. They stress clarity, structure, and full
explanation. A well-written procedures section must:
● Describe the integration strategy (where and how the data are mixed);
The chapter begins with a powerful reminder that before you even design
your research, you need to start writing it out. Writing isn’t just a form of
recording ideas—it’s a process of thinking them through. Creswell and
Creswell start by pointing out that any proposal, regardless of whether it is
qualitative, quantitative, or mixed methods, must offer the reader a clear
argument for why the research matters, how it will be done, and why it will
be credible.
The authors insist that every proposal, no matter the design, must tell a
logical story. That story is not about the researcher; it is about the research
problem, its urgency, its significance, and how it can be meaningfully
explored. This framing sets the tone for the rest of the chapter, which unfolds
into concrete writing strategies for qualitative, quantitative, and mixed
methods proposals, followed by deep guidance on ethical integrity.
Formats for Different Methodological Approaches
The mixed methods format is the most complex, and rightly so. The authors
provide a comprehensive outline that includes both sets of
components—quantitative and qualitative—as well as integration points, a
clear rationale for mixing, a visual diagram of procedures, and explanation
of timing, priority, and philosophical alignment. Mixed methods research
is not just about collecting two types of data. It’s about methodological
coherence.
After outlining the structures, the authors offer specific advice on writing
each section of a proposal. They encourage students to start writing early
and often, not to wait until everything is clear in their mind. Writing, in this
view, is a form of discovery. You write to think, not only to express what
you already know.
The advice is blunt: just get the first draft down, messy as it may be. Then revise. Then polish.
The point here is that the academic writing process should be iterative, and
writing is itself a cognitive activity that brings your ideas into clearer focus.
They recommend that researchers start with an outline, then draft sections
quickly, and then refine. You don’t need to be perfect at first. But what you
do need is structure, clarity, and consistency.
This section of the chapter turns the focus inward, examining what makes
writing not just functional, but effective and scholarly. Creswell and
Creswell argue that writing is a form of thinking, and that good writing
helps both the reader and the writer clarify the research’s purpose, logic, and
implications.
They offer insights drawn from literary writers, including Stephen King,
William Zinsser, and Annie Dillard, who stress the importance of developing
a writing habit, using concrete details, avoiding overwriting, and editing
for clarity and energy. These aren’t just writing tips—they’re strategies to
make your proposal readable and persuasive.
A great deal of attention is paid to the idea of coherence—the glue that holds
your manuscript together. This includes consistent use of terms, clear
transitions between sections, and paragraphs that build logically from
sentence to sentence. They introduce the “hook-and-eye” technique:
imagining each sentence as linked to the next, pulling the reader forward. If a
sentence doesn’t logically follow the one before, or if a paragraph jumps to a
new idea without transition, coherence is lost and the reader’s trust begins to
erode.
They also discuss voice and tense. Active voice is favored over passive voice
because it creates clarity and momentum. Strong verbs are better than
abstract nouns. Verb tense should match the part of the proposal: use past
tense for literature and completed studies; present or future tense for your
plans. Mixed methods researchers must be especially careful to signal which
part of the study is being discussed and which tense matches it.
Polishing your writing also means trimming the fat: eliminating unnecessary
qualifiers, modifiers, and jargon. Say what you mean directly. Don’t inflate
your language to sound “academic”—clarity and simplicity are the hallmarks
of excellent scholarly writing.
The second half of the chapter addresses the issue of ethics in research.
Creswell and Creswell treat this not as an afterthought, but as a foundation
of all good research. Ethics, they insist, is not just about avoiding
misconduct. It is about honoring your participants, your readers, and the
research community through transparency, fairness, and responsibility.
They outline specific ethical concerns that must be addressed at every phase
of research:
The chapter concludes with a call to develop your own writing practice, to
write every day, to reflect often, to stay engaged with the process. Read good
research. Read good writing. Write proposals not only to fulfill requirements,
but to craft compelling, coherent, and ethical studies.
Creswell and Creswell remind us that every decision we make in
writing—from choosing a topic to structuring our argument to citing a
source—is a reflection of our intellectual and ethical identity. A good
proposal is not just about research design—it is about communicating with
integrity, clarity, and care.