Intro to Research Methods

Definition

Educational research is rooted in a larger human effort to understand the
world around us. Throughout history, people have relied on five major
sources of knowledge to explain phenomena and solve problems. The first is
experience, which is the most basic way of knowing something. It’s the idea
that we learn through direct contact with the world—like when a child
touches a hot stove and learns it causes pain. It’s immediate and powerful, but
experience is also personal and subjective. What you experience might not
be true for someone else, and we can’t always trust memory or feelings to
give us reliable knowledge.

The second is authority. This happens when we accept something as true
because someone we trust tells us so. It could be a teacher, a doctor, or a
religious leader. In education, for instance, a student might follow a teaching
method simply because an experienced teacher recommends it. While
authority can be helpful, it’s limited. Just because someone has status
doesn’t mean they’re always right. That’s why scientific research doesn’t stop
at authority—it moves beyond it.

Then comes deductive reasoning, a method of thinking that moves from the
general to the specific. It starts with a broad principle and applies it to a
particular case. For example: all mammals have lungs; a rabbit is a mammal;
therefore, a rabbit has lungs. Deduction is logical, but its reliability
depends completely on the truth of the initial generalization. If the
starting point is flawed, the conclusion will be too.

On the other side, there’s inductive reasoning, which moves from specific
observations to general conclusions. If you observe many rabbits and all of
them have lungs, you might conclude that all rabbits in the world must have
lungs. Induction is powerful for building generalizations from real-world
patterns, but it's also uncertain—because you never know if the next case
will break the pattern.

To overcome the weaknesses of both inductive and deductive reasoning,
researchers developed the scientific approach, which is the foundation of all
true research. The scientific approach integrates both reasoning methods and
adds a crucial element: hypothesis testing. A hypothesis is a tentative
statement that suggests a relationship between variables. It’s not just a
guess—it’s a testable proposition that allows the researcher to predict what
should happen if certain conditions are met. The scientific method works by
observing a phenomenon, forming a hypothesis, deducing what should
happen if the hypothesis is correct, and then collecting data to test whether
those expectations are met. It is a cycle of observation, reasoning, testing,
and refining.

Now, research itself must be understood not just as information gathering or
reading academic texts. True research is a formal, systematic application of
the scientific method to investigate questions and generate new knowledge.
It is not research when someone prepares for a class by reading a few
sources. It’s not research when a student writes a literature review without
conducting original analysis. It’s not research when a journalist investigates
an event. Those are all worthwhile activities, but what separates research is
its impact on theory and its use of scientific procedures to validate findings.
Research is about discovering or verifying relationships, constructing
knowledge, and contributing to a theoretical framework that others can build
upon.

With that definition in place, we can now speak about educational research,
which is the application of this scientific method to problems in the field of
education. Educational research seeks to improve our understanding of the
teaching and learning process. It involves asking questions about student
behavior, curriculum effectiveness, classroom strategies, teacher
performance, educational policy, and more. It is both practical and
theoretical. The ultimate goal is to discover general principles or
explanations of behavior that educators can use to predict, explain, and
control events in educational contexts.

Several important definitions clarify the nature of educational research:

● Good described it as “the study and investigation in the field of
education.”

● Munroe emphasized that its final purpose is to “ascertain principles and
develop procedures for use in the field of education.”

● Mulay called it “any systematic study designed to promote the
development of education as a science.”

● Crawford saw it as “a systematic and refined technique of thinking.”

● J.W. Best defined it as an activity that aims to “develop a science of
behavior in educational situations.”

From these definitions, we draw out the key characteristics of educational
research. First, it is always problem-oriented—it tries to find solutions to
real educational issues, whether that’s improving teaching methods or
understanding student behavior. It emphasizes the creation of generalizations
and theories that can guide future action. It often involves inference,
meaning it goes beyond just the sample it studies—it tries to say something
meaningful about a larger population. It relies on primary data or existing
data used for new purposes, but in either case, it only accepts what can be
verified through observation. It may sometimes begin in a messy or
trial-and-error fashion, but it must ultimately apply rigorous analysis. It
must be objective and logical, free from personal bias. It requires expertise,
meaning the researcher must know what’s already known and how to study it
further. It involves a quest for the unknown and often depends on
imagination and interdisciplinary thinking. And perhaps most importantly,
it is not as exact as physical science. Human behavior is more complex and
harder to control than chemical reactions, which makes educational research
more challenging but also deeply human.

Now we come to the need and importance of educational research.
Education is not isolated—it is connected to philosophy, history, psychology,
sociology, and economics. Each of these fields influences how education is
practiced and understood. To develop sound educational theories, we must
understand the impacts of these disciplines. Furthermore, education is both a
science (a field with systematic knowledge) and an art (a practice requiring
skill and judgment). Because of this, we need research to expand the
scientific knowledge base and refine the practical methods we use to teach
and learn. Modern education faces enormous challenges—rising enrollments,
student diversity, discipline issues, curriculum reforms, technological change.
These problems can’t be solved by tradition or guesswork. They require
evidence-based solutions that only research can provide. As educational
goals shift from fixed content to lifelong personal development, as society
and technology evolve, the scope of educational research must also grow to
match the complexity of what education is now expected to accomplish.

Educational research is classified into three major types. The first is basic
research (also known as fundamental or pure research). This aims to expand
theoretical knowledge without any immediate application in mind. It is
concerned with discovering principles that have universal validity. It relies on
rigorous methodology, careful sampling, and systematic procedures to
build theories. The second type is applied research, which aims to solve
immediate practical problems. It is conducted in real-life settings like
classrooms and often focuses on the effectiveness of specific interventions or
teaching strategies. Although it may not produce general laws, it is extremely
useful in guiding practice. The third type is action research, which is unique
because it is conducted by practitioners themselves—teachers, principals,
counselors—who investigate their own practices to improve them. It is
collaborative, practical, and often cyclical, involving a process of planning,
acting, observing, and reflecting. The goal is not to build theory but to bring
about local change and professional growth.

The scope of educational research is incredibly broad. It spans every aspect
of education. In educational philosophy, it explores values, ideologies, and
aims of education. In educational sociology, it studies how social structures,
cultural norms, and community dynamics affect education. In educational
psychology, it examines cognitive and emotional development, motivation,
personality, and social influences on learning. In curriculum development,
it investigates how to structure content, select materials, and meet the needs
of learners and society. In educational technology, it analyzes the impact of
tools, media, and instructional methods. In measurement and evaluation, it
builds and tests assessment tools and explores how to fairly and accurately
measure learning. It extends to guidance and counseling, comparative
education, teacher education, and specialized areas like inclusive
education, distance education, vocational education, value education, and
environmental education. In short, if it touches teaching or learning in any
way, it can be studied scientifically through educational research.

Finally, educational research has several limitations that must be
acknowledged. Human behavior is complex and varies widely across
contexts. This makes generalizations difficult. Observations are often
subjective, and replication of studies is hard because no two classrooms or
students are ever exactly alike. There is often an interaction between the
observer and the observed, which can affect the behavior being studied.
Control over variables is limited, unlike in laboratory sciences. And
measurement tools in education are still evolving—many concepts like
creativity, motivation, or critical thinking are hard to quantify. Despite these
challenges, educational research remains an indispensable tool for improving
education, deepening our understanding of learning, and shaping more
effective and just educational systems.

TYPES & CHARACTERISTICS OF RESEARCH

To truly understand what research is, we must first define its essential
characteristics. Research, in its most authentic form, is not random
information-gathering. It is a structured inquiry that uses acceptable
scientific methodology to solve problems and create knowledge that is
valid, verifiable, and (ideally) generalizable. Every real research process must
contain certain qualities that make it rigorous and trustworthy.

The first essential quality is that research must be controlled. This means that
when studying the relationship between variables, we must minimize or
eliminate the influence of outside factors. In physical sciences, control is
easier because experiments often happen in laboratories. In the social
sciences, especially in education, perfect control is difficult because people
are involved—each with different emotions, histories, and environments.
Still, control remains a goal. Even if we cannot eliminate external factors
entirely, we must account for them and understand how they influence the
phenomena we are studying.

The second is that research must be rigorous. This refers to the thoroughness
with which the researcher designs, conducts, and analyzes the study.
Procedures must be justified, logical, and aligned with the goals of the
inquiry. In other words, you don’t just pick a method because it’s easy—you
use it because it fits your question, your variables, and your context.

Third, research must be systematic. It is not a random process. There is a
clear, logical order: identifying the problem, reviewing literature, developing
hypotheses or questions, choosing a design, collecting data, analyzing it,
interpreting findings, and drawing conclusions. Each step must follow the
previous one with coherence.

Fourth, research must be valid and verifiable. This means that whatever
conclusions you reach must be supported by data, and someone else using
your methods should be able to verify your results. Without verifiability,
research becomes speculation. The goal is not just to believe your own results
but to produce findings that others can trust and confirm.

Fifth, research is empirical. It deals with real-world data—things that can be
observed, measured, or experienced. You don’t base research on dreams,
beliefs, or untested assumptions. You gather evidence from reality, and you
use it to support or reject your hypotheses.

Lastly, research must be critical. Every part of the process—your
assumptions, your data, your methods, your interpretation—must be open to
scrutiny. A good researcher questions everything, not just in others’ work, but
in their own. The idea is to produce knowledge that can withstand (resist)
challenge, debate, and inspection.

Now that we’ve defined what research must be, we can examine its types,
which are classified from three different perspectives: the application of
findings, the objectives of the study, and the mode of enquiry.

I. Types of Research by Application Perspective

Here, research is classified based on whether its findings are used to solve
real-world problems or to build theoretical understanding.

1. Pure Research (also called fundamental or basic research) is done for the
sake of knowledge itself. Its purpose is to test and refine theories, methods, or
ideas. It might not have any immediate practical use. For instance, a study on
how human memory stores information under stress may not help a teacher
today, but it builds knowledge that could inform educational psychology in
the future. Pure research deals with abstractions, conceptual problems, and
general principles. It is intellectually driven.
2. Applied Research, on the other hand, deals directly with practical
problems. It applies theories and methods developed through pure research to
real-life situations. In education, applied research might investigate whether a
specific teaching method improves reading skills among third-grade students.
The goal is not to build a general theory, but to solve a current problem or
improve practice. Applied research is what most teachers and policymakers
turn to when they want immediate guidance; when practitioners carry it out
on their own practice, it is what we call action research.

It’s important to know that pure and applied research are not opposites—they
often inform one another. Pure research may eventually lead to practical
applications, and applied research may raise theoretical questions that push
basic research further.

II. Types of Research by Objectives Perspective

This classification is based on what the researcher is trying to achieve with
the study. There are four main types:

1. Descriptive Research aims to systematically describe a situation, event,
population, or phenomenon. It doesn’t ask why things happen or explore
relationships—it simply records what exists. For example, a study that
collects data on how many students attend online classes in rural areas is
descriptive. It provides facts and summaries without interpreting causes or
effects.

2. Correlational Research looks for relationships or associations between
two or more variables. It does not imply causality—it simply tells you
whether two things are linked. For instance, if you investigate the relationship
between sleep hours and academic performance among high school students,
and you find that more sleep is associated with higher grades, that’s a
correlational finding. You don’t yet know if sleep causes better
performance—but you’ve established a link.
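To make this concrete, here is a minimal sketch of such an analysis in Python with SciPy (the sleep and grade values are invented for illustration; SPSS or Excel, mentioned later in this document, would report the same quantities):

    # Hypothetical paired data: nightly sleep hours and exam grades
    # for ten students (values invented for illustration).
    from scipy import stats

    sleep_hours = [5, 6, 6, 7, 7, 8, 8, 8, 9, 9]
    exam_grades = [55, 60, 62, 70, 68, 75, 78, 74, 85, 82]

    # Pearson's r measures the strength and direction of the linear
    # association between the two variables.
    r, p = stats.pearsonr(sleep_hours, exam_grades)
    print(f"r = {r:.2f}, p = {p:.4f}")
    # A large positive r establishes a link, not a cause.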
3. Explanatory Research goes one step further and seeks to explain why
certain relationships exist. It’s not enough to show that stress and failure are
related—you try to explain the mechanisms behind it. This research is often
more theoretical and requires deeper analysis, often involving the testing of
hypotheses about cause-and-effect relationships.

4. Exploratory Research is used when there is little or no existing
knowledge about a phenomenon. It’s preliminary and open-ended, often done
to identify new questions or directions. For example, if no one has studied
how Moroccan students use social media for studying English, you might
begin with an exploratory study to learn what platforms they use, how they
use them, and whether they believe it helps them. Exploratory research is
useful for discovering problems or generating hypotheses for future studies.

In practice, most research projects combine these objectives. A study can be
descriptive and correlational. It can start as exploratory and become
explanatory later.

III. Types of Research by Mode of Enquiry Perspective

This third lens classifies research based on how the data is collected and how
flexible or structured the process is. There are two main types:

1. Quantitative Research follows a structured approach. The research
questions, hypotheses, sampling, data collection tools, and analysis methods
are all decided in advance. It relies on numerical data and statistical analysis.
For example, if you design a questionnaire with closed-ended questions,
administer it to 500 students, and use SPSS to run t-tests or correlations, you
are doing quantitative research. It is best suited for studies where the goal is
measurement, comparison, and generalization.
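As a minimal sketch of the analysis step in that workflow (the text names SPSS; the equivalent in Python with SciPy is shown here, with invented scores for two hypothetical groups):

    # Hypothetical mean questionnaire scores (1-5 Likert items) for two
    # groups of students; values are invented for illustration.
    from scipy import stats

    group_a = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1, 3.7]
    group_b = [3.1, 3.5, 2.9, 3.4, 3.2, 3.6, 3.0, 3.3]

    # Independent-samples t-test: do the two group means differ more
    # than chance alone would explain?
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")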

2. Qualitative Research uses an unstructured approach. It allows for
flexibility and exploration. You might start with broad questions and refine
them as you go. The data is often textual or visual—like interviews,
observations, diaries, or case studies. For instance, sitting in a classroom,
observing students, and writing detailed field notes would fall under
qualitative research. It seeks to understand meaning, context, and human
experience rather than to quantify it.

It’s important to note that qualitative and quantitative research are not
enemies. They are complementary. Many studies use a mixed methods
approach, which combines both. For example, you might start with
qualitative interviews to explore student attitudes, and then use that data to
build a structured questionnaire for a larger quantitative study.

Research Paradigms

Beneath these three perspectives lie deeper philosophical foundations called
research paradigms. A paradigm is a worldview or a set of beliefs about
how knowledge is constructed and understood.

There are two dominant paradigms in social science research. The first is the
positivist paradigm, also called the scientific or systematic approach. It
assumes that reality is objective and measurable. This paradigm fits well with
quantitative research. It emphasizes objectivity, control, and statistical
analysis.

The second is the naturalistic paradigm, also known as the qualitative,
ethnographic, or ecological approach. It assumes that reality is complex,
subjective, and socially constructed. This paradigm supports qualitative
research, focusing on meaning, context, and participant experience.

These paradigms are not mutually exclusive. In fact, many researchers today
argue that the purpose of the study should determine the paradigm—not
personal preference or academic tradition. Sometimes, a positivist lens helps
answer the question. Other times, a naturalistic lens is better suited. The best
researchers learn to switch between paradigms depending on the nature of the
inquiry.

Final Reflection

In summary, understanding the types and characteristics of research allows
you to identify what kind of study you are doing, what rules and standards
apply to it, and how it fits into the larger world of knowledge creation.
Whether it is pure or applied, descriptive or explanatory, qualitative or
quantitative—it must always be controlled, rigorous, systematic, empirical,
critical, and verifiable. These are not options. They are the very foundation
that separates real research from assumption, bias, and noise.

THE RESEARCH PROCESS

The research process is a sequence of interrelated, often overlapping steps
that together form the complete journey of a research study. While the
process is presented in a linear fashion, in reality, it is rarely so clean. Steps
may be revisited, modified, or looped through multiple times. However, to
maintain clarity and order, we can outline the process in eleven essential
steps:

1. Formulating the Research Problem

This is the first and arguably the most critical step in the entire research
process. Before anything else, you need to identify what you are trying to
study. A research problem is not just a topic—it is a clearly defined question
or issue that needs investigation. There are two broad types of problems:
those dealing with the state of nature (descriptive or factual situations) and
those concerned with relationships between variables (causal or
correlational issues). The formulation involves narrowing down a broad idea
into a specific, researchable problem.
To do this effectively, you must fully understand the problem, often by
discussing it with colleagues or experts. Then, you rephrase it into
operational terms—language that is specific, measurable, and clear. This step
also includes defining any key terms or variables involved in the problem. A
well-defined research problem determines the direction of the entire study:
the data you’ll need, the methods you’ll use, and how you’ll analyze and
interpret your findings.

2. Extensive Literature Survey

Once your problem is defined, you need to immerse yourself in existing
knowledge. This is where you conduct a literature review, exploring
previous research, theories, and data related to your problem. The purpose
here is to understand what has already been discovered, identify gaps, refine
your focus, and avoid duplication.

There are two main types of literature to examine: conceptual literature,
which deals with theoretical ideas and frameworks, and empirical literature,
which includes real-world studies that have tested similar ideas. A thorough
literature review helps you sharpen your research problem and prepares you
for the next step: hypothesis development.

3. Development of Working Hypotheses

A working hypothesis is a tentative statement that predicts a relationship
between variables. It acts as a guidepost for your research. This hypothesis
will be tested using data, so it must be clear, specific, and measurable. For
example: “Students who sleep more than seven hours perform better in
language exams than those who sleep less.”

Hypotheses arise from a combination of your literature review, discussions
with experts, preliminary data, and logical reasoning. They help you delimit
the scope of your study, sharpen your focus, and define what kind of data
you need. In exploratory research, you might not have a
hypothesis—because the aim is to explore unknown areas. But in most cases,
the presence of a working hypothesis is essential.

4. Preparing the Research Design

A research design is the blueprint of your study. It explains how you will
carry out your research, from selecting subjects to collecting and analyzing
data. A good research design ensures that you collect valid, reliable, and
relevant data using minimal resources (time, money, and effort).

The research design depends on the purpose of your study—whether it’s
exploratory, descriptive, diagnostic, or experimental. There are many
types of designs, including experimental, non-experimental,
cross-sectional, longitudinal, comparative, and case study designs. The
researcher must also decide whether to use qualitative, quantitative, or
mixed methods.

Key elements to consider in a research design include:

● What information is needed

● How it will be gathered

● Who will gather it

● How long it will take

● What tools or instruments will be used

● What the cost and resource requirements will be


5. Determining Sample Design

In research, the entire group under study is called the population or
universe. Studying the whole population is often impractical, so we select a
sample, which is a smaller group meant to represent the whole. The process
of selecting this group is called sample design.

There are two broad categories of sampling:

● Probability sampling, where each member of the population has a
known chance of being selected (e.g., simple random sampling,
stratified sampling, cluster sampling)

● Non-probability sampling, where the selection is based on
non-random criteria (e.g., convenience sampling, purposive sampling,
quota sampling)

Choosing the right sampling method is vital. Poor sampling leads to bias,
inaccurate results, and invalid conclusions. In some cases, mixed sampling
techniques may be used.
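The difference between the two categories can be sketched in a few lines of Python (the population, its strata, and the sample size of 100 are all hypothetical):

    # Hypothetical population of 1,000 students tagged by school type.
    import random

    random.seed(42)  # fixed seed so the example is reproducible
    population = [(i, random.choice(["urban", "suburban", "rural"]))
                  for i in range(1000)]

    # Simple random sampling: every student has an equal, known
    # chance of being selected.
    simple_sample = random.sample(population, k=100)

    # Stratified sampling: group by school type, then sample within
    # each stratum in proportion to its share of the population
    # (rounding may shift the total by one or two).
    strata = {}
    for student in population:
        strata.setdefault(student[1], []).append(student)

    stratified_sample = []
    for members in strata.values():
        k = round(100 * len(members) / len(population))
        stratified_sample.extend(random.sample(members, k=k))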

6. Collecting the Data

Data collection is where your research becomes real. It is the process of
gathering information from your chosen sample using your selected tools.
There are two types of data:

● Primary data, collected firsthand through tools like surveys,
interviews, observations, or experiments

● Secondary data, gathered from existing sources like books, articles,
reports, or databases

Methods of data collection include:

● Observation (watching behavior directly)

● Personal interviews (face-to-face questioning)

● Telephone interviews

● Mail questionnaires

● Schedules (structured interviews or forms filled by enumerators)

The method you choose depends on the nature of your study, your resources,
the desired accuracy, and the population involved.

7. Execution of the Project

Execution means carrying out the research exactly as designed. This is where
planning becomes action. The researcher must ensure that data collection is
consistent, systematic, and of high quality. If using interviews or surveys,
interviewers need training, and field checks should be conducted to verify
accuracy and honesty.

Common problems during execution include non-response, errors in data
entry, or unexpected disruptions. The researcher must be proactive in
addressing these issues, keeping everything under statistical control so that
the data collected meets the intended standards.

8. Analysis of Data

Once data is collected, it needs to be organized and processed. This begins
with editing, coding, and tabulation—transforming raw data into structured,
interpretable formats. For example, responses to open-ended questions may
be categorized into themes, while numerical data is formatted into tables.

Analysis involves the use of statistical techniques—whether descriptive
(mean, median, mode, standard deviation) or inferential (t-tests, chi-square
tests, correlation, regression). Computers and software like SPSS or Excel are
often used here. The purpose is to reveal patterns, test hypotheses, and find
meaningful relationships between variables.
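For the descriptive measures just listed, a short sketch with Python’s standard library (invented exam scores) shows what each one computes:

    # Descriptive statistics over a hypothetical set of exam scores.
    import statistics

    scores = [62, 75, 75, 81, 90, 68, 75, 88, 70, 79]

    print("mean:", statistics.mean(scores))      # arithmetic average
    print("median:", statistics.median(scores))  # middle value when sorted
    print("mode:", statistics.mode(scores))      # most frequent value
    print("stdev:", statistics.stdev(scores))    # sample standard deviation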

9. Hypothesis Testing

Once the data is analyzed, the next step is to test whether your initial
hypothesis holds true. This is done using statistical tests such as:

● t-test (for comparing means)

● Chi-square test (for associations between categorical variables)

● F-test (for comparing variances)

● ANOVA (analysis of variance)

Through hypothesis testing, you decide whether to reject your original
assumption or to retain it (strictly speaking, to fail to reject it). This step is
crucial in determining the validity of your
findings and whether they are statistically significant or could have
occurred by chance.
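Two of the tests listed above, sketched in Python with SciPy (the contingency counts and group scores are invented; SPSS or any statistics package would report the same quantities):

    from scipy import stats

    # Chi-square test of association between two categorical variables,
    # e.g. tutoring (yes/no) by exam outcome (pass/fail).
    observed = [[30, 10],   # tutored:     30 passed, 10 failed
                [20, 25]]   # not tutored: 20 passed, 25 failed
    chi2, p, dof, expected = stats.chi2_contingency(observed)
    print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

    # One-way ANOVA: do mean scores differ across three teaching methods?
    method_a = [70, 74, 68, 77, 72]
    method_b = [80, 85, 78, 83, 81]
    method_c = [65, 69, 71, 66, 70]
    f_stat, p = stats.f_oneway(method_a, method_b, method_c)
    print(f"F = {f_stat:.2f}, p = {p:.4f}")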

10. Generalisations and Interpretation


If the hypothesis is supported by the data, the researcher may move toward
generalization—drawing broader conclusions that apply to the larger
population. This is how theories are developed. If no hypothesis was initially
proposed (such as in exploratory studies), the researcher may still interpret
findings in light of existing theories or frameworks.

Interpretation involves explaining what the results mean, why they matter,
and how they relate to existing knowledge. It also involves identifying new
questions or future areas of research. Good interpretation connects results to
theory, context, and application.

11. Preparation of the Report or Thesis

Finally, the research must be documented in a formal report or thesis. This
is the finished product that communicates your process, findings, and
implications. A standard report includes:

● Preliminary pages (title, abstract, acknowledgements)

● Main text (introduction, methodology, results, discussion, conclusion)

● End matter (bibliography, appendices, tables, figures)

The report must be written in clear, concise, and objective language. It
should avoid vague expressions like “it seems” or “maybe.” It should include
confidence intervals, discuss any limitations, and be transparent about the
constraints faced during the research.
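As a small illustration of the kind of confidence interval a report should include, here is a 95% interval for a sample mean, computed in Python with SciPy over invented scores:

    # 95% confidence interval for a sample mean, via the t distribution.
    import statistics
    from scipy import stats

    scores = [72, 75, 81, 68, 90, 77, 74, 79]
    mean = statistics.mean(scores)
    sem = stats.sem(scores)  # standard error of the mean

    low, high = stats.t.interval(0.95, df=len(scores) - 1,
                                 loc=mean, scale=sem)
    print(f"mean = {mean:.1f}, 95% CI = [{low:.1f}, {high:.1f}]")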

Final Thoughts

The research process is not just a checklist—it’s a dynamic, recursive, and
intellectual journey. Each step builds on the previous one, but researchers
often circle back, refine, adapt, and improve along the way. Mastering these
steps means not just knowing what they are, but understanding how they
connect, why they matter, and what each one contributes to the production
of meaningful, valid, and useful knowledge.

THE SELECTION OF A RESEARCH APPROACH

(Fully explained and expanded)

In any serious academic or professional research project, the very first
foundational decision a researcher must make is choosing a research
approach. But this isn’t just a question of “what tools will I use to gather
data?”—it’s far more philosophical and strategic. A research approach is a
comprehensive plan that guides every part of a study, from the broad
worldview you hold about reality and knowledge, all the way down to the
specific techniques you use to collect, analyze, and interpret your data. It’s
like the architecture of a building: the style, materials, structure, and function
are all interdependent. If any piece is weak, the whole thing can collapse.

Three key elements form the structure of a research approach:

1. Philosophical worldviews

2. Research designs

3. Research methods

These components are not chosen randomly—they must cohere with one
another and with the nature of the research problem. The approach you
choose will shape how you view your data, how you ask your questions, how
you analyze what you find, and even what you consider a valid answer. For
example, someone trying to measure the impact of a specific drug will need a
highly controlled, quantitative approach. Someone studying how a group of
refugees rebuilds their identity through storytelling will need a qualitative,
open-ended, interpretive approach. So the selection of a research approach
isn’t just technical—it reflects your beliefs about reality, about how
knowledge is formed, and what kind of understanding you are aiming to
produce.

THE THREE MAIN RESEARCH APPROACHES

There are three dominant research approaches that every researcher must be
familiar with:

Qualitative Research

Qualitative research is not concerned with counting things or measuring them
with numbers. Instead, it focuses on understanding human meaning—how
people experience the world, how they make sense of situations, and how
their realities are shaped by their language, culture, and interactions. It is
often used when the goal is to explore a new area, to understand a complex
phenomenon, or to amplify the voices of those who are often not heard in
traditional data.

For example, imagine you want to study how young immigrant girls in
Morocco experience the transition to university. A qualitative approach might
involve in-depth interviews, participant observations, and even collecting
personal diaries. The point is not to generalize or predict—but to understand
their lived realities, in depth, from their own perspectives. Qualitative
researchers often work inductively, meaning they don’t start with a fixed
theory. Instead, they let the theory emerge from the data they gather, through
a process of reflection, pattern identification, and interpretation.

Data is typically collected in the participant’s setting—in their real
environment, not in a lab or via a standardized form. The final report is often
written flexibly—in narrative form, with a focus on complexity,
contradiction, and multiple meanings.

Quantitative Research

In contrast, quantitative research is about testing theories by measuring and
analyzing numerical data. It operates on the belief that reality is external and
objective, and that we can understand this reality by observing patterns,
controlling variables, and applying statistical models. The quantitative
researcher believes in generalizable truth: if you test a relationship in a
sample and find it statistically significant, you can likely apply that finding to
a larger population.

Let’s say you want to test whether students who attend private tutoring
centers perform better on standardized English tests than those who don’t.
You would formulate a hypothesis, select variables (e.g., hours of tutoring,
test scores), and measure them with instruments such as surveys or test
results. Your goal would be to use statistical analysis to either support or
reject your hypothesis.

Quantitative research follows a deductive logic—you start with a theory,
break it into variables, test it, and then refine or reject it. The report is rigidly
structured: introduction, literature review, method, results, discussion.

Mixed Methods Research

Mixed methods research sits between qualitative and quantitative. It doesn’t
reject either—it combines both, in order to produce a richer, more nuanced
understanding. The idea is simple: some problems are too complex to be
captured by only one type of data. For instance, a survey might show that
70% of students are anxious before oral presentations (quantitative), but it
won’t tell you why they feel that way or how it affects their long-term
learning. That’s where qualitative interviews would come in, allowing
students to narrate their experiences.

Mixed methods researchers integrate qualitative and quantitative data in one
study—either simultaneously or in phases—to achieve what neither approach
could do alone. But this isn’t just about collecting two types of data—it’s
about carefully weaving them together to complement, explain, or expand
one another.

THE PHILOSOPHICAL WORLDVIEWS

Before you choose a method or design, you need to ask: what do I believe
about knowledge? This is where philosophical worldviews come in. These
are not optional—they are the foundational belief systems that influence
every research decision. There are four major worldviews that you need to
deeply understand:

Postpositivism

This is the worldview most aligned with quantitative research. It is rooted
in the scientific tradition and assumes that there is an objective reality out
there which can be measured and tested—but acknowledges that we can
never know it with absolute certainty. That’s why it’s called post-positivism
(after positivism): it accepts that human knowledge is fallible, that
hypotheses can only be supported, not proven, and that bias is always a
risk.

Postpositivists start with a theory, define testable variables, and use tools
like surveys, experiments, and statistical analysis to reduce complexity into
measurable parts. Their goal is to discover laws or causal relationships that
can be generalized. For example, you might ask: “Does access to Wi-Fi
increase student achievement in remote schools?” A postpositivist would try
to isolate variables and control other factors to test this.

Constructivism

This worldview is at the heart of qualitative research. It argues that reality is
not objective, but constructed by human beings through their experiences,
cultures, and languages. There is no single truth—there are multiple realities,
all shaped by context. So the job of the researcher is not to test a hypothesis,
but to interpret meaning—to listen, observe, participate, and try to
understand how people make sense of their world.

Constructivist researchers focus on context, culture, and subjective
experiences. For example, instead of asking, “Does group work improve
academic performance?” a constructivist might ask, “How do students
experience collaboration in the classroom?” There’s no predefined
theory—they build theory from the data. They are aware that their own
background influences interpretation, so they position themselves within the
research instead of pretending to be “neutral.”

Transformative

The transformative worldview challenges both positivism and constructivism
for not going far enough. It believes that research must actively challenge
injustice and empower marginalized voices. This approach is political,
ethical, and action-oriented. It sees research as a tool not just for
understanding, but for changing oppressive structures in society.

Pragmatism

Pragmatism is the worldview behind mixed methods research. It doesn’t
commit to any one philosophy of truth. Instead, it asks: What works? What
helps solve the problem? Pragmatists believe that the research question
should dictate the method, not the other way around. If both numbers and
stories are needed to understand a problem, then both should be used.
Pragmatic researchers are flexible. They value both subjective meaning and
objective measurement. They collect data in stages, from different sources,
and use whatever tools are best for the job. They may start with a survey and
then follow up with interviews, or begin with observations and then design an
experiment. For pragmatists, the only real test of research is whether it helps
us understand and improve something in the real world.

RESEARCH DESIGNS AND METHODS

Each research approach contains specific designs. These are like the
architectural plans for the building—they lay out the structure of how you’ll
collect and analyze your data.

● In quantitative research, common designs include surveys and
experiments.

● In qualitative research, the major designs are narrative research,
phenomenology, grounded theory, ethnography, and case studies.

● In mixed methods, you’ll encounter designs like convergent,
explanatory sequential, and exploratory sequential.

Each design has its logic, rules, procedures, and applications. Choosing the
wrong design—or misunderstanding how to implement it—can destroy the
credibility of your research.

The methods refer to the actual tools and techniques used to gather data:
interviews, questionnaires, focus groups, statistical software, field notes,
coding procedures, etc. These are the instruments you use to bring your
research design to life. But they must always align with your worldview and
design. You can’t use a postpositivist mindset and apply open-ended
interviews without thinking through how they’ll be analyzed, interpreted, and
made rigorous.

QUANTITATIVE RESEARCH DESIGNS

Quantitative designs are tightly structured, logical, and built to test
hypotheses or examine relationships between variables through the use of
numerical data. These designs often aim to explain, predict, or control
phenomena by identifying cause-effect links or patterns of association.

Two of the most frequently used quantitative designs are surveys and
experiments:

1. Survey Research

Survey research is about collecting data from a sample to make
generalizations about a larger population. The goal is to describe trends,
opinions, attitudes, or characteristics using a standardized questionnaire or
interview format. A survey might involve a one-time snapshot of responses
(called cross-sectional) or track changes over time (longitudinal).

For example, you might use a survey to investigate how Moroccan university
students perceive AI-based learning platforms. You’d create a questionnaire
with closed-ended questions (e.g., Likert scales), administer it to a
representative sample, and then analyze the responses statistically to identify
trends or differences across variables like gender, field of study, or level of
experience with technology.

The power of survey research lies in its ability to quantify attitudes or
behaviors and compare groups. However, it is only as good as its
design—poor sampling, biased wording, or low response rates can undermine
its validity.

2. Experimental Research

Experimental research is the most powerful design for determining causal
relationships. Its goal is to find out whether a specific treatment or
intervention has an effect on an outcome. The defining feature of
experimental research is manipulation: the researcher deliberately introduces
a change (the independent variable) to see its effect on something else (the
dependent variable).

In a true experiment, participants are randomly assigned to either a
treatment group (which receives the intervention) or a control group (which
does not). This randomization helps eliminate bias and strengthen the claim
that any differences observed are due to the treatment, not to chance or other
variables.

For instance, you might want to test whether a new online grammar tool
improves writing accuracy in EFL students. You could randomly assign half
of a class to use the tool for a month, while the other half uses traditional
methods. Then, you compare their writing test scores. If the tool-users
perform significantly better, you may conclude that the tool is effective.
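The random-assignment step itself is mechanically simple; here is a minimal sketch with a hypothetical class roster of 30 (names and group sizes invented):

    # Randomly assign a hypothetical class of 30 to two groups.
    import random

    random.seed(7)  # fixed seed so the split is reproducible
    roster = [f"student_{i:02d}" for i in range(1, 31)]
    random.shuffle(roster)

    treatment = roster[:15]  # will use the new grammar tool
    control = roster[15:]    # will continue with traditional methods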

There are also quasi-experiments, where random assignment isn’t possible,
and single-subject designs, often used in special education or psychology,
where one person’s progress is tracked over time with and without the
intervention.

QUALITATIVE RESEARCH DESIGNS

Qualitative research designs are used to understand lived experiences,
social processes, and cultural phenomena. These designs are fluid,
adaptive, and rooted in the participants’ context and meaning-making. Unlike
quantitative designs, they don’t aim to generalize or predict—they aim to
interpret and explore.

1. Narrative Research

Narrative research focuses on the stories of individuals. The researcher
collects personal accounts of life experiences—often through in-depth
interviews—and then “restories” them into a coherent narrative. The goal is
to understand how people make sense of their lives and what meanings they
assign to their experiences.

For example, you could study the narratives of first-generation university
students in Morocco. You’d ask them to share stories of their educational
journeys, challenges, and family expectations. You’d analyze the structure,
themes, and symbols in their stories to uncover how identity, resilience, and
belonging are constructed.

Narrative research is deeply personal. It values voice, memory, emotion,
and context, and often integrates the researcher’s own positionality into the
final narrative.

2. Phenomenological Research

Phenomenology is about capturing the essence of lived experience. The
researcher investigates a phenomenon that several people have experienced
and tries to describe its core meaning.

Let’s say you want to study the experience of anxiety during public speaking
among EFL learners. You would interview multiple students, identify
common patterns in how they describe their anxiety, and distill those into a
rich description of what that experience is like across participants. The aim is
not explanation or comparison, but deep description of experience.

This design is grounded in philosophy and emphasizes reflection, empathy,
and depth.

3. Grounded Theory

Grounded theory is used when you want to develop a new theory based on
data gathered directly from participants. It is especially useful when existing
theories don’t fully explain the process or behavior you’re studying.

In grounded theory, data collection and analysis happen simultaneously. You
start with interviews or observations, then code the data, identifying
categories and patterns. As your analysis deepens, you refine these into core
concepts and relationships, eventually forming a theoretical model.

For instance, if you're researching how rural teachers adapt to unexpected
online teaching due to COVID-19, and there's no clear theory explaining this
process, you might conduct grounded theory research to build one.

4. Ethnography

Ethnography comes from anthropology and involves studying a cultural
group in their natural environment over a long period. The goal is to
understand their shared practices, language, rituals, and meanings.

An educational ethnography might explore how students in a religious
boarding school in Morocco experience discipline and spiritual identity.
You’d live among them, observe their daily life, and document how culture
shapes their educational experience.

Ethnography demands immersion, long-term observation, and a deep
understanding of context.

5. Case Study

A case study provides an in-depth analysis of a single case—which could be
a person, group, program, event, or institution—within its real-life context.
Case studies are “bounded,” meaning the case has limits in time, space, or
activity.

For example, you could do a case study on a single school that implemented a
bilingual education policy. You’d explore how it was planned, how teachers
were trained, how students responded, and what outcomes emerged.

Case studies use multiple data sources—interviews, documents,
observations—to build a full picture of the case.

MIXED METHODS RESEARCH DESIGNS

Mixed methods research is not just about combining interviews with surveys.
It’s about integration—using both qualitative and quantitative elements to
provide a comprehensive understanding of a problem. The designs are
structured around the timing of data collection and the purpose of combining
the data.

1. Convergent Design

In a convergent design, you collect both types of data at the same time,
analyze them separately, and then merge the results. This helps you validate
findings across methods or see where they confirm or contradict one
another.

For example, you might survey teachers about their attitudes toward inclusive
education (quantitative) while also interviewing a few of them to understand
the reasons behind those attitudes (qualitative). You compare the results to
see how they complement each other.

2. Explanatory Sequential Design


This design begins with quantitative research, followed by qualitative
research that explains or deepens the findings. It's especially useful when
you need statistical patterns first, but then want to explore the human stories
behind them.

Imagine you conduct a survey showing that female STEM students have
lower confidence in math than their male peers. You might then interview a
small number of these students to understand why they feel that way and
what social or educational factors contribute to it.

3. Exploratory Sequential Design

The opposite of the previous design, this one starts with qualitative
exploration, then uses quantitative tools to follow up. You might conduct
focus groups to explore student engagement, and then develop a survey
instrument based on the themes that emerged, which you distribute to a larger
sample.

This design is great for building new instruments, generating variables, or
creating models that you can later test quantitatively.

SUMMARY OF DESIGNS

Each design has strengths and limitations. What matters is that your choice of
design fits:

● Your research question

● Your philosophical worldview

● The type of data you need


● The audience for your findings

Using the wrong design can lead to mismatched questions, inappropriate
methods, and weak conclusions. But when the design aligns with your
purpose and values, your research becomes not just effective, but meaningful.

The Research Problem and Questions

(Why what you’re studying shapes how you study it)

A research problem is more than just a topic—it’s the specific issue, gap, or
need that your study seeks to address. This problem often emerges from a
close reading of the existing literature, where you notice something missing,
contested, or unexplored. But just identifying a problem isn’t enough. You
must also choose an approach that best allows you to investigate it in a valid
and meaningful way.

If your problem involves identifying factors that influence an outcome,
testing a theory, or predicting relationships between variables, then a
quantitative approach is the most appropriate. For instance, if you’re trying
to find out whether a certain teaching method improves grammar accuracy
among Moroccan high school students, that’s a question that involves
measurement, comparison, and likely statistical analysis. It’s best handled
with numbers, instruments, and structured data.

On the other hand, if your problem involves understanding a process,
exploring a new phenomenon, or giving voice to marginalized groups,
then a qualitative approach is better. If, for example, you want to explore
how female engineering students in Morocco experience isolation in a
male-dominated classroom, your goal is not to test a theory, but to capture
personal stories, generate themes, and understand complex emotional
and social dynamics. A qualitative approach, using interviews or
observations, is the right tool for the job.

But what if your problem requires both kinds of insight? Let’s say you want
to study how teachers’ beliefs about inclusion affect their classroom
practices. You might start by surveying a large sample of teachers to identify
general attitudes, then follow up with interviews to understand how those
beliefs play out in real classrooms. In this case, you’re looking at both
breadth and depth—so a mixed methods approach is ideal.

Mixed methods are particularly useful when one type of data alone doesn’t
tell the full story, or when one phase of the research raises questions that
require a different method to answer. These designs are powerful, but they
also demand a lot of time, expertise, and planning.

Personal Experience

(Why who you are affects what you do)

Your training, skills, and comfort zone as a researcher naturally affect
which approach you gravitate toward. If you’ve been trained in statistics,
experimental methods, and scientific writing, you may feel more confident
using a quantitative approach. If, on the other hand, you enjoy storytelling,
interpretation, interviews, and fieldwork, then qualitative research may feel
more natural to you.

But it’s not only about comfort—it’s also about the integrity of the
research. If you’re doing a qualitative study, but you don’t know how to
conduct a proper interview or analyze themes in narrative data, you risk
misinterpreting your results. Likewise, if you’re running a quantitative
experiment but you don’t know how to control for confounding variables or
apply the right statistical tests, your findings may be invalid.

That’s why mixed methods research can be particularly challenging—it
demands a dual skill set. You need to be able to switch between numerical
precision and narrative interpretation. You need the patience to do twice the
amount of data collection and the analytical clarity to combine different kinds
of evidence into a coherent conclusion. Researchers who pursue mixed
methods must be methodologically bilingual.

Your philosophical orientation also plays a role. Some researchers
genuinely believe that numbers are the best way to understand the world.
Others believe that meaning is too complex to be reduced to variables. Some
are more pragmatic—they’ll use whatever tools help them answer the
question. Knowing yourself—your beliefs, your comfort level, your
training—is part of the honest intellectual work of choosing a method.

The Audience

(Why your reader or consumer matters)

Finally, the audience for your research—your professor, your thesis
supervisor, a journal editor, a policymaker, or even a funding agency—will
have expectations that can influence your choice of approach. Certain
disciplines, institutions, or communities have established norms about what
counts as “rigorous” research.

For example, journals in psychology or economics often favor quantitative
studies with clear statistical results. If you submit a qualitative narrative study
to those journals, it may not be well received—not because it’s bad, but
because it doesn’t match their standards of evidence. On the other hand,
journals in education, anthropology, or sociology are often more open to
qualitative or mixed approaches.

If you’re writing a master’s thesis, your advisor’s expertise and
preferences may matter. If your advisor is a qualitative researcher, they may
be more comfortable guiding a grounded theory study than an experiment.
This doesn’t mean you can’t push boundaries—but you must be strategic.

The key is to align your research with your audience’s values, standards,
and interests without compromising the integrity of your work. You’re not
only producing knowledge—you’re communicating it, and that means
understanding who your readers are and what they expect.

Final Thoughts

The choice of a research approach is not a superficial decision. It shapes
every part of your project—from the questions you ask to the way you
present your findings. You must consider:

● The nature of your research problem

● Your skills, training, and philosophy

● The needs and values of your audience

There is no single “correct” approach. The best researchers are those who
match their methods to their mission, who respect the complexity of their
topic, and who make these decisions consciously and coherently.

THE LITERATURE REVIEW

In the process of designing a research proposal, selecting a research
approach—whether qualitative, quantitative, or mixed methods—is a critical
step. However, once the broad methodological direction is determined, the
next foundational task is conducting a literature review. This is not just a
formality or a citation exercise. It is a rigorous, strategic process that helps
the researcher understand the current state of knowledge, identify gaps
and contradictions, and justify the importance and feasibility of their own
study. The literature review plays a central role in framing the study,
narrowing its focus, supporting its relevance, and positioning it within the
larger academic conversation.

The chapter opens by reminding us that selecting a topic comes before the
literature itself. The researcher must first clarify what they are studying and
why it is meaningful. A topic should be written in simple, clear terms—like a
phrase or a working title—to provide continuous orientation during the
research design process. Something like “my study is about how Moroccan
high school teachers use digital tools in remote learning” might serve as a
preliminary anchor. The authors stress that novice researchers often try to
imitate overly complex or abstract writing they see in published journal
articles, which can cloud their initial thinking. In reality, the best research
topics start with straightforward, focused ideas. As the project develops,
these ideas will evolve into more complex structures, but they must begin
grounded and accessible.

A working title is useful not just for the researcher, but for communicating
the study’s direction to others—supervisors, colleagues, committees. The
authors recommend creating a brief phrase or even posing a direct question
such as, “What influences student motivation in online learning
environments?” These types of expressions offer both conceptual clarity and
focus. Importantly, before moving forward, the researcher must reflect on
whether the topic is researchable (can it realistically be studied with the
resources and access available?) and whether it should be researched (does it
contribute something meaningful to the field?). This is not a matter of
opinion—it must be evaluated through academic criteria.

For a topic to be researchable, certain practical conditions must be met:
availability of participants, access to relevant sites or data, availability of
time, and the researcher’s own competence in relevant tools and methods
(like data analysis software, language proficiency, etc.). A study might be
impossible if these are lacking. But even if a study is doable, the researcher
must still ask if it should be done. The chapter highlights several reasons a
study might be worth pursuing: it adds to the existing body of knowledge,
replicates previous studies in a new setting, amplifies underrepresented
voices, promotes social justice, or contributes to the researcher’s own
intellectual growth. A study that meets one or more of these criteria has
academic and practical justification.

At this stage, the researcher is advised to consult others—peers, supervisors,
experts—to validate their topic’s relevance. Creating a one-page summary
with the problem, central research question, potential data sources, and
significance of the study helps clarify these decisions and makes it easier to
get feedback. A strong topic will be one that not only reflects the researcher’s
personal interests but also aligns with broader scholarly conversations and
audience needs.

UNDERSTANDING THE LITERATURE REVIEW

Once the topic is confirmed as worth studying, the next task is to review the
relevant literature. The literature review serves several essential functions.
First, it demonstrates the researcher’s familiarity with previous studies on
the topic. This is critical not only for scholarly credibility but also to avoid
duplication, identify gaps, and build upon existing knowledge. Second, it
places the study within an ongoing academic dialogue, showing how it
contributes to, extends, or challenges what’s already known. Third, it helps
justify the importance of the study and provides a comparative framework
for analyzing and interpreting the eventual findings.

There are many legitimate reasons to include a literature review: integrating
prior research, offering a critique of past studies, building bridges between
related fields, or identifying central themes. In most academic contexts,
especially dissertations and theses, the review is expected to integrate and
organize the literature thematically—from general to specific, from broad
theoretical perspectives to narrow methodological concerns—until it
logically leads to the gap your study will fill.

The way the literature is used depends heavily on the research approach. In
qualitative research, literature is used inductively. Since qualitative studies
often aim to explore new or poorly understood phenomena, they avoid
imposing theoretical frameworks too early. Instead, the literature might be
used sparingly in the introduction to justify the problem but will be expanded
later to compare findings. Sometimes in theory-driven qualitative
designs—like ethnography or critical theory—literature is introduced earlier
to establish a conceptual framework. In grounded theory, case studies, or
phenomenology, the literature is often held back until after findings emerge,
to allow participants’ voices and experiences to shape the analysis. The
chapter outlines three placement models for literature in qualitative studies:

1. At the beginning to frame the problem;

2. In a separate section toward the start (more common in theoretical designs);

3. At the end, where the findings are compared with existing literature—especially in grounded theory.

In quantitative research, the literature is used deductively. It sets the stage
for the research by summarizing theories, supporting the need for specific
research questions or hypotheses, and offering a clear theoretical foundation.
The literature review usually appears as a separate section, often titled
“Review of the Literature,” and is structured around the variables involved
in the study—independent, dependent, and those showing relationships
between the two. The review is analytical, cumulative, and establishes what
is known, what is unclear, and what this study will address. It is also
revisited at the end of the study to compare results with past findings.

In mixed methods research, the use of literature depends on the dominant
phase (qualitative or quantitative), the design type (e.g., sequential or
convergent), and the audience. If a study begins with a quantitative phase,
the review will resemble that of a quantitative study. If it starts with
qualitative data, the review will be lighter in the beginning and richer in
interpretation at the end. In convergent designs—where both forms are
equally weighted—the literature can be used in flexible ways depending on
what best suits the balance.

The authors offer specific advice:

● In qualitative studies, use literature sparingly at first, unless your design requires early theoretical framing.

● In quantitative studies, use literature deductively to frame the entire structure of the study.

● In mixed methods, align the literature review with whichever method is dominant, and make the purpose of the literature explicit.

CONDUCTING A LITERATURE REVIEW

To begin a literature review is not to open Google and read casually. It is a
deliberate, systematic, and cumulative process of gathering, organizing,
and synthesizing existing research to understand where your study fits. The
first step in this process is to identify key words—those central terms,
themes, or variables that define your study. These key words come either
from your topic formulation or your early readings. For example, if your
study is about “the effects of teacher feedback on student motivation in
Moroccan high schools,” your keywords might be “teacher feedback,”
“student motivation,” “secondary education,” “Morocco,” and maybe
“pedagogical practices.”

Once you’ve selected these keywords, you move into searching academic
databases. This is not like searching for news or general knowledge—it’s
about accessing scholarly publications, usually peer-reviewed, that have
academic credibility. The chapter lists many databases, each with a specific
disciplinary focus:

● ERIC: Education-focused, ideal for teachers, administrators, and education researchers.

● PsycINFO: Focused on psychology and human behavior—good if your study has psychological dimensions (like motivation, emotion, cognition).

● Google Scholar: Broad, fast, and accessible, but less selective.

● PubMed: For health and life sciences, including behavioral studies.

● ProQuest, JSTOR, and EBSCO: Large, subscription-based platforms offering access to thousands of journals and dissertations.

At this point, you’re aiming to find around 50 relevant sources. You won’t
use all of them, but this is a good number to work with as you start
evaluating. As you read through the titles and abstracts, you begin to narrow
down to the most central and relevant studies—those that directly
contribute to your understanding of your own research problem. This requires
skimming, then deep reading, and evaluating how each article relates to
your topic, either by supporting, challenging, or expanding it.
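To make this keyword search concrete, here is a minimal Python sketch of a query against ERIC. Treat it as an illustration only: the endpoint, parameter names, and response fields are assumptions based on ERIC’s public API, and the keywords are the hypothetical ones from the feedback-and-motivation example above.

import requests

# Keywords drawn from the hypothetical topic above.
query = '"teacher feedback" AND "student motivation"'

# Assumption: ERIC exposes a JSON search endpoint of roughly this form.
response = requests.get(
    "https://api.ies.ed.gov/eric/",
    params={"search": query, "format": "json", "rows": 50},  # aim for ~50 sources
    timeout=30,
)
response.raise_for_status()

# Skim titles and years to judge relevance (field names are assumptions).
for doc in response.json().get("response", {}).get("docs", []):
    print(doc.get("publicationdateyear"), "-", doc.get("title"))

Most subscription platforms (ProQuest, EBSCO) offer similar programmatic or exportable results; the point is that a precise keyword string, not casual browsing, drives the search.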

As you gather sources, you are not simply making a list—you begin building
a literature map. A literature map is a visual diagram that shows the
relationship between different clusters of literature. It helps you organize
your review thematically. The top of the map might represent your broad
research topic (e.g., “student motivation”), then branches might break into
themes like “feedback and motivation,” “cultural context of learning,”
“gender differences in response to feedback,” and so on. These visual maps
are not included in every thesis, but they are essential tools for your own
thinking—they help you identify where your study will contribute something
new.

The next step is to begin writing summaries and abstracts of each important
article. These are not just for citations—they are short analytical notes that
record:

● The problem being studied

● The aim or purpose of the research

● The sample and methods used

● The key findings

● Any flaws or limitations

● How the study relates to your own

The goal is to create a database in your own mind of what has already been
studied, how it was studied, and where the gaps are. For instance, you might
notice that several studies examine feedback and motivation, but none focus
on Moroccan students, or none look at the impact of gender, or that all were
done in university contexts rather than high schools. These observations
justify the gap your study will fill.
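Because every summary records the same fields, it can help to hold them in a uniform structure. Here is a minimal Python sketch; the field names simply mirror the bullet list above, and the sample entry is invented.

from dataclasses import dataclass

@dataclass
class StudyAbstract:
    """One analytical note on a source, mirroring the fields listed above."""
    citation: str            # APA-style reference
    problem: str             # the problem being studied
    purpose: str             # the aim or purpose of the research
    sample_and_methods: str  # who was studied, and how
    key_findings: str
    limitations: str         # flaws or limitations you noticed
    relevance: str           # how the study relates to your own

notes = [
    StudyAbstract(
        citation="Author (2020). Feedback and motivation. [hypothetical]",
        problem="Unclear link between feedback and motivation",
        purpose="Test whether weekly feedback raises motivation",
        sample_and_methods="80 university students; survey; correlations",
        key_findings="Feedback correlated with higher motivation",
        limitations="University context only; gender not examined",
        relevance="Supports my gap: no high school or Moroccan data",
    ),
]

# Queries over your notes make gaps visible, e.g., studies set in high schools:
print([n.citation for n in notes if "high school" in n.sample_and_methods])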

WRITING THE LITERATURE REVIEW


Once you've read, summarized, and mapped the literature, the next task is to
actually write your review. This is where many students panic—because they
don’t know how to organize the material. This chapter provides a clear
model. The literature review is not just a pile of summaries. It is a narrative
synthesis that explains:

● What is already known about the topic

● What is debated or contradictory

● What is unknown or underexplored

● And where your research fits in

For quantitative or mixed methods studies, there is a particularly helpful
structure: divide your literature review according to the variables in your
study. This model includes five main components:

1. Introduction – where you tell the reader how your literature review is structured.

2. Topic 1 – the literature related to your independent variable (e.g., “teacher feedback”).

3. Topic 2 – the literature related to your dependent variable (e.g., “student motivation”).

4. Topic 3 – studies that examine the relationship between the two variables, if any exist.

5. Summary – a conclusion that highlights themes, identifies gaps, and states what your study will contribute.

This model works very well because it aligns directly with your research
questions or hypotheses. It helps reviewers see that you have covered all
theoretical bases and that your study is both logical and needed.

For qualitative studies, the literature review might not follow a
variable-based format. Instead, it’s usually organized thematically—around
issues, perspectives, or theoretical frameworks. You might group your review
into:

● Historical context

● Cultural perspectives

● Studies on the phenomenon in different settings

● Methodological insights

You might place your literature review in the introduction, as a standalone
section, or at the end—depending on your methodology and your advisor’s
expectations.

For mixed methods research, the structure of the literature review depends
on the dominant strand. If your study begins with a quantitative phase, you
follow the variable-based model. If it begins with qualitative data collection,
you follow the thematic model. If your study is truly convergent, you will
need to blend both—possibly using parallel subsections.

ABSTRACTING STUDIES: HOW TO READ DEEPLY, THINK CRITICALLY, AND RECORD STRATEGICALLY

Once you've gathered a set of relevant studies for your literature
review—through database searches, journal collections, or references from
other works—the next stage is not simply to read them, but to extract,
abstract, and synthesize their most critical elements. This process is known
as abstracting studies. An abstract is not merely a copy-paste of the paper’s
own summary—it is your analytical and focused synthesis of what that
paper contributes to your specific research problem.

When you abstract a study, you’re answering specific questions: What
problem is the article addressing? What is the central focus or objective of the
study? Who are the participants or sample? What method was used to collect
and analyze data? What were the key findings? And finally, in relation to
your own work—how is this study useful, flawed, or relevant?

A well-written abstract summary should make it possible for any
reader—your supervisor, a journal reviewer, or even your future self—to
immediately grasp the structure and value of the study. This becomes
extremely helpful when you're dealing with dozens or even hundreds of
sources and need to remember their significance and limitations later when
writing your formal review. Abstracting also forces you to analyze critically,
not just read passively. If you're reviewing an empirical study, you should
identify flaws in research design, theoretical gaps, limitations in sample or
setting, and other potential weaknesses. If you're reviewing a conceptual or
typological piece, you analyze its argument structure, its relevance to your
theory, and its usefulness as a framework.

For example, let’s say you read a quantitative study that tests whether daily
reading practice improves vocabulary retention in ESL learners. Your abstract
might include: the research problem (limited vocabulary acquisition in ESL
settings), the aim (to test whether daily reading improves outcomes), the
participants (120 Moroccan high school students), the methods (pre/post
vocabulary tests, control group, statistical analysis using t-tests), the findings
(students in the reading group showed significantly higher gains), and finally,
how this relates to your own study (perhaps you are studying writing fluency,
so this study supports the importance of literacy exposure, even if the focus is
slightly different).
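The statistical core of that hypothetical study can be reproduced in a few lines. A sketch with invented gain scores, assuming numpy and scipy are available:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Invented vocabulary gain scores (post-test minus pre-test) per group.
reading_group = rng.normal(loc=12.0, scale=4.0, size=60)  # daily reading
control_group = rng.normal(loc=8.0, scale=4.0, size=60)   # no extra reading

# Independent-samples t-test, the analysis named in the abstract above.
t_stat, p_value = stats.ttest_ind(reading_group, control_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value is what would let the authors report "significantly
# higher gains" for the reading group.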

When it comes to non-empirical work, such as a theoretical article or a
literature synthesis, the approach is slightly different. You don’t look for
results, but instead focus on: the problem the article discusses, the central
theme, the argument structure, and the implications of the theory. You may
also comment on how this theory supports or challenges other frameworks
you're exploring.

The ability to abstract well is the intellectual backbone of a literature
review. If your abstracts are vague, inaccurate, or shallow, your review will
collapse under weak evidence. But if your abstracts are detailed, critical, and
insightful, they become the building blocks of a powerful argument that
justifies and frames your own study.

USING STYLE MANUALS: MAINTAINING CONSISTENCY AND ACADEMIC INTEGRITY

As you write your literature review, it's not just about what you say, but how
you say it—and most importantly, how you cite your sources. Academic
writing follows strict guidelines to ensure that references are handled
ethically, consistently, and professionally. This is where style manuals
come in. These are official guides that tell you how to format your references,
how to structure your headings, how to handle quotations, and how to present
tables, figures, and in-text citations.

In the social sciences, the most commonly used manual is the APA Style
Manual (currently in its 7th edition). It is the gold standard for research in
education, psychology, and the broader social sciences. It governs everything
from how you write author names in the text (e.g., “Creswell & Plano Clark,
2018”), to how you structure your reference list, to how you write up tables,
footnotes, and even how you use bias-free language.

Why is using a style manual so important? First, it ensures that your reader
can easily locate your sources and verify your references. Second, it avoids
plagiarism—unintentional or otherwise—by ensuring every piece of
borrowed information is clearly documented. Third, it signals your academic
maturity: sloppy citations, inconsistent formatting, or invented styles suggest
inexperience and reduce your credibility.

The authors of the chapter recommend several specific practices: make sure
all in-text citations are properly reflected in your reference list; make sure
your headings follow the required levels (APA has five heading levels, each
with its own formatting); decide early on how many levels your project will
need; and be consistent throughout. Also, you must decide where to place
footnotes (though they are less used today), and pay close attention to how
tables and figures are labeled and formatted.
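The first of those practices, checking that every in-text citation appears in the reference list, is mechanical enough to rough out in code. A naive Python sketch follows; the regular expression only catches simple parenthetical “Author, Year” citations and will miss many legitimate APA forms.

import re

manuscript = """
Feedback shapes motivation (Creswell & Plano Clark, 2018), and earlier
work reached similar conclusions (Kerlinger, 1979).
"""

references = """
Creswell, J. W., & Plano Clark, V. L. (2018). Designing and conducting
mixed methods research. SAGE.
"""

# Naive pattern: a parenthetical citation ending in a four-digit year.
cited = set(re.findall(r"\(([^()]+?),\s*(\d{4})\)", manuscript))

for author, year in sorted(cited):
    surname = author.split("&")[-1].strip().split()[-1]  # crude surname guess
    if surname not in references or year not in references:
        print(f"Possibly missing from reference list: ({author}, {year})")

Running this flags (Kerlinger, 1979), which is cited but absent from the reference list: exactly the kind of inconsistency the manuals are designed to prevent.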

DEFINITION OF TERMS: ACHIEVING PRECISION, CLARITY, AND CONSISTENCY

In every research study, there are key terms—specialized, technical, or
field-specific words that need to be clearly defined to avoid confusion or
misinterpretation. This is especially true in formal writing, such as theses or
journal articles, where clarity is essential. The definition of terms section,
which may appear early in the introduction or as a separate section, ensures
that your readers understand exactly what you mean when you use terms
that may have multiple interpretations.

But defining terms is not as simple as grabbing a dictionary definition. In
research, we often require operational definitions—terms that are defined in
the context of how they are measured or understood within your study.
For example, if you’re studying “academic success,” you must define
whether that refers to GPA, exam scores, self-perceived learning, or
something else. You cannot assume the reader knows what you mean.
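In code terms, an operational definition is a classification rule. A toy Python sketch, assuming purely for illustration that “academic success” is operationalized as a cumulative GPA of 3.0 or above:

# Assumed operational definition for this sketch:
# "academic success" = cumulative GPA >= 3.0 on a 4.0 scale.
SUCCESS_GPA_CUTOFF = 3.0

def is_academically_successful(gpa: float) -> bool:
    """Classify a student under the operational definition above."""
    return gpa >= SUCCESS_GPA_CUTOFF

for name, gpa in {"A": 3.4, "B": 2.7, "C": 3.9}.items():  # invented GPAs
    print(name, is_academically_successful(gpa))

Choosing a different operationalization (exam scores, self-perceived learning) would change the rule entirely, which is exactly why the definition must be stated.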

In quantitative research, where variables must be precisely measured, the
definition of terms is done early, often in a clearly labeled section.
Researchers define every variable—both independent and dependent—using
standard definitions from the literature or, when necessary, customized
operational definitions. This ensures that the data collected aligns with the
theoretical construct being studied.

In qualitative research, definitions may be more tentative and evolve
throughout the study. Because qualitative research is inductive and
open-ended, researchers often define terms as they emerge from the
field—in interviews, observations, or participant narratives. In grounded
theory or ethnography, for example, definitions are co-constructed through
the interaction between the researcher and the participants. Therefore, it’s
common in qualitative proposals to include only tentative definitions at the
beginning, with fuller definitions provided later in the findings or discussion.

In mixed methods research, definitions depend on the dominant approach. If
the first phase is quantitative, definitions are precise and set early. If it begins
with qualitative work, definitions are likely to be more flexible and
developed as the study unfolds. Either way, a separate section may still be
included for defining unfamiliar concepts like “convergent design” or
“transformative framework.”

Overall, definitions are not academic fluff—they are tools of scientific
clarity and control. Without them, your readers could misinterpret your
research questions, misunderstand your methods, or misread your
conclusions. A well-written definition section demonstrates your command of
your field and your commitment to intellectual precision.

STRUCTURING A LITERATURE REVIEW IN QUANTITATIVE AND MIXED METHODS RESEARCH

When you are writing a literature review for a quantitative or mixed
methods study, your task is not only to summarize what others have done,
but to carefully link past research to the variables in your own study. In this
type of review, the literature is not just background—it is the theoretical
justification for your research questions or hypotheses. That means every
section of your review must help set up the argument for why your study is
both logical and necessary.

The structure recommended here is based on the logic of variables.
Specifically, it follows the same structure that your study will: independent
variables, dependent variables, and then studies that look at the relationship
between the two. This creates a clear, compelling pathway from what is
already known, to what is not yet known (and what you’re about to
investigate).

The model contains five key sections:

1. Introduction to the Literature Review: This section is more than a
simple paragraph. You explicitly tell the reader how the literature
review is structured. This is like a roadmap. For instance, you might
write, “This literature review addresses three areas: first, the literature
on teacher feedback (independent variable); second, the literature on
student motivation (dependent variable); and third, studies that examine
the relationship between the two. The review concludes with a
synthesis that identifies gaps and establishes the need for the present
study.” This opening sets up the reader’s expectations and shows you
are approaching the literature in a logical, academic way.

2. Review of Topic 1: The Independent Variable(s): This section
focuses solely on the independent variable, or what you will
manipulate or categorize in your study. For example, if your study
examines the impact of cooperative learning on student performance,
this section reviews literature on cooperative learning. If you have
more than one independent variable, you may create subsections. The
key is to review how this variable has been studied, what definitions
and measurements have been used, what theories have supported it, and
what gaps remain.

3. Review of Topic 2: The Dependent Variable(s): This section now
reviews literature on the outcome variable—what your study is
measuring. Continuing the earlier example, if your dependent variable
is “student motivation,” you would review literature on how motivation
has been defined, measured, and influenced in prior studies. Again, if
there are multiple dependent variables (e.g., motivation, test
performance, satisfaction), you may break this into subsections. Your
goal here is to demonstrate that you know the theoretical and
empirical terrain surrounding your outcome.

4. Review of Topic 3: Literature on the Relationship Between
Variables: This is the crux of the literature review, because it shows
how much (or how little) is already known about the specific
relationship you want to test. Ideally, this section should narrow the
focus of the review to studies that are very close to your own. For
instance, if you are studying how teacher feedback influences
motivation among Moroccan high school girls, you look for studies that
examine feedback and motivation—especially in similar cultural or
educational contexts. If no such studies exist, you must show this
explicitly and justify your study as filling that gap.

5. Summary and Synthesis: This is not a simple recap. You must
synthesize the major themes, contradictions, and unanswered questions
that emerged across all three topic sections. You explain what the
literature tells us so far, what it doesn’t tell us, and how your study is
positioned to add something new and valuable. This section should
also point forward to your methodology: if the literature shows
confusion about how motivation is measured, for example, that may
justify your choice of a particular instrument or scale. This closing
section demonstrates that you’re not just reporting knowledge—you’re
creating a logical bridge from past research to your own.

This model is particularly effective because it forces you to delimit your
study clearly. You don’t wander into irrelevant tangents. You focus only on
what supports your specific research question and variables, which makes
your study’s contribution precise, defensible, and academically sound.

LITERATURE MAPS: A VISUAL TOOL FOR THINKING AND ORGANIZATION

A literature map is a conceptual diagram—a figure—that helps you
visualize how the various bodies of research you’ve read are organized and
how your study fits into the larger field. It is not required in every thesis, but
it is one of the most powerful tools you can use in your planning and
thinking process. A literature map is not just decoration; it is a mental
structure, and creating one forces you to make difficult but important
decisions about how the literature is organized, where it overlaps, and where
the gaps are.

There are multiple ways to structure a literature map:

1. A hierarchical map, where the topic is at the top, followed by major themes or categories, then subthemes, and finally your own study at the bottom.

2. A flowchart, where you move from left to right, showing how different areas of literature flow into or inform your research topic.

3. A Venn diagram, where overlapping circles represent bodies of literature, and the intersecting area shows where your study contributes.

What’s important is not the shape, but the clarity of structure. A good
literature map shows:

● What major categories of literature exist on your topic

● How those categories are divided into subthemes

● What studies represent each category (with citations)

● Where your proposed study is located

● Which categories your study builds upon or contributes to

Let’s say your topic is about “digital literacy and gender in Moroccan
universities.” Your top-level categories might be:

● Digital literacy

● Higher education pedagogy

● Gender and technology


Each category might branch into specific subtopics (e.g., “Digital
literacy among teachers,” “Access to digital tools by gender,” etc.). At
the bottom of the map, you place a box for “Current Study,” and draw
lines to all the branches of the literature that it is building on.
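A first draft of such a map can even be roughed out in code before it is drawn. A sketch using the hypothetical categories above, assuming the networkx library is installed:

import networkx as nx

G = nx.DiGraph()
topic = "Digital literacy and gender in Moroccan universities"

# Top-level categories branch from the topic.
for category in ["Digital literacy", "Higher education pedagogy",
                 "Gender and technology"]:
    G.add_edge(topic, category)

# Categories branch into subtopics (invented for illustration).
G.add_edge("Digital literacy", "Digital literacy among teachers")
G.add_edge("Gender and technology", "Access to digital tools by gender")

# The proposed study sits at the bottom, linked to the branches it builds on.
for branch in ["Digital literacy among teachers",
               "Access to digital tools by gender"]:
    G.add_edge(branch, "Current Study")

# Print the hierarchy as an indented outline.
for node in nx.topological_sort(G):
    depth = len(nx.shortest_path(G, topic, node)) - 1
    print("  " * depth + node)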

You might begin with 25 sources for a preliminary map. For a full thesis or
dissertation, you may be working with 100 or more. It takes time and
multiple drafts to get the structure right. You’ll need to ask: Does my map
clearly show where the conversation is, and what is missing? Do the
branches reflect actual patterns in the literature, or just categories I imposed
arbitrarily? Which part of the map does my study most directly link to—and
how?

In presentations, in thesis proposals, and even in journal articles, a literature
map communicates your intellectual framework visually, and helps your
reader understand your scholarly positioning immediately.

FINAL ADVICE ON PRIORITIZING, EVALUATING, AND STRUCTURING LITERATURE

The final parts of the chapter offer comprehensive advice for researchers
trying to manage which literature to use and how to structure it. The
authors propose a priority system:

● Start with refereed journal articles, because they are peer-reviewed, credible, and current. These should form the core of your review.

● Then look at books, especially research monographs or scholarly volumes on specific topics.

● Conference papers come next—they’re good for finding emerging ideas, though not always rigorously reviewed.

● Dissertations should be used cautiously—quality varies, and they’re often hard to access.

● Web sources should be used with the highest caution—unless they’re online journals with editorial boards, avoid them or use them only to supplement.

The final section also walks you through the process of writing your review.
The key is to group sources thematically, write clear transitions, and avoid
listing studies like a laundry list. You should always write with a purpose: to
show patterns, debates, gaps, and justify the need for your research. This
review sets the stage for your method, your questions, and your theoretical
framework.

CHAPTER 3 – THE USE OF THEORY

At the heart of any scholarly research project lies a core intellectual structure:
theory. Theory isn’t just a background idea or an optional element—it is a
central guiding force that influences everything from how we formulate our
questions to how we interpret our findings. This chapter begins by
establishing that the role of theory varies significantly depending on whether
your study is quantitative, qualitative, or mixed methods, and part of being
a skilled researcher is knowing how to use theory appropriately within your
chosen approach.

In quantitative research, theory plays a deductive role. Researchers begin
with an existing theory, from which they derive hypotheses—clear, testable
predictions. The entire structure of the study, from the selection of variables
to the methods of data collection and analysis, is designed to test whether the
real-world evidence supports or refutes that theoretical expectation. A typical
quantitative dissertation might devote a full section to explaining the broader
theoretical model that frames the study. This model offers a rationale for the
hypotheses and a logical structure for the design and interpretation of
findings.

In qualitative research, the relationship with theory is far more flexible and
varied. In some cases—especially in grounded theory—the theory is not
imposed at the start but is instead generated as an outcome of the study. The
researcher goes into the field with an open mind, collects data, and lets
patterns, themes, and relationships emerge inductively. But in other
qualitative traditions, such as ethnography or critical theory-based
research, theory may play a more explicit role at the beginning, helping
shape what is observed, what questions are asked, and how the data is
analyzed. In social justice-oriented qualitative studies, theory serves as a
lens, one that guides the researcher to focus on power, marginalization, and
inequality. This lens is not neutral—it’s explicitly political and ethical,
aimed at transformation.

In mixed methods research, both roles are in play: researchers may begin
with a theory they want to test quantitatively, while also generating themes
and explanations qualitatively. Furthermore, mixed methods research might
use a theoretical framework—often drawn from social science or
participatory traditions—that integrates both quantitative and qualitative data
collection and interpretation. These frameworks can be disciplinary (like
economic or psychological theories) or critical (like feminist, racial, or
disability-based frameworks), and they give coherence to studies that are
often very complex in their design.

So, the chapter opens by saying: before you go into methods or data, you
must understand the theory—because theory affects everything. And what
follows is a detailed exploration of how that works in each paradigm.

QUANTITATIVE THEORY USE

Understanding Causality in Quantitative Research


Before we even talk about variables or theories in the abstract, the chapter
begins with a crucial foundational point: in quantitative research, we are
often not just looking at relationships between things—we are trying to
establish or test causal claims. That means we are asking questions like:
Does X cause Y? Not just “Are X and Y related?” but “Is X the actual reason
Y happens?” That distinction is everything in quantitative work.

To illustrate this, the authors present a simple example: the question of
whether drinking one glass of red wine daily causes a reduced risk of heart
attack. In this case, “drinking red wine” is your independent variable
(X)—the variable you think might be influencing something else. The
dependent variable (Y) is “heart attack”—the thing you want to explain,
predict, or reduce. This structure, where we think X affects Y, is the essence
of causality.

But—and here’s where it gets critical—the problem is that sometimes, what
looks like a causal link between X and Y is actually due to a third, hidden
factor: a confounding variable (Z). In the red wine example, maybe people
who drink red wine moderately also tend to exercise regularly—and maybe
it’s the exercise, not the wine, that reduces heart attack risk. If you don’t
account for this third variable, you could wrongly conclude that wine is the
cause, when it’s not.

That’s why true experiments are considered the gold standard in quantitative
research. Only by randomly assigning participants to groups (e.g., one
group drinks red wine, another doesn’t) can we control for these confounding
variables. Experiments give us control, which is necessary to make stronger
claims about causality.
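A quick simulation makes the confounding problem visible. A sketch with invented numbers, assuming numpy: here exercise (Z) drives both wine drinking (X) and heart health (Y), while wine itself has no effect at all.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

exercise = rng.normal(size=n)                 # Z: the hidden confounder
wine = 0.8 * exercise + rng.normal(size=n)    # X: partly driven by Z
heart_health = exercise + rng.normal(size=n)  # Y: driven by Z, NOT by X

# Naive observational view: wine looks protective (r is clearly positive).
print("corr(wine, health) =", round(np.corrcoef(wine, heart_health)[0, 1], 2))

# Random assignment severs the wine-exercise link, as in a true experiment:
wine_randomized = rng.normal(size=n)
print("corr(randomized wine, health) =",
      round(np.corrcoef(wine_randomized, heart_health)[0, 1], 2))

The first correlation is substantial even though wine does nothing; the second, mimicking random assignment, collapses to about zero. That is the logic behind calling experiments the gold standard.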

However, if you can’t conduct a true experiment—maybe it’s not ethical, or
not feasible—you can use survey research instead. Surveys don’t establish
causality as strongly, but they can help you test associations or correlations
between variables. You might find, for instance, that there’s a statistical
relationship between wine consumption and reduced heart attack risk, but
without experimental control, you can’t prove the cause. In that case, you
would be testing non-causal hypotheses, like “X is associated with Y,”
rather than “X causes Y.”

This distinction is essential: experiments test causation, surveys test
association.

Defining Variables in Quantitative Research

Now that the idea of causality is clear, the chapter moves into discussing the
types of variables—because theory in quantitative research is built around
how variables relate to each other.

A variable is anything that varies—anything you can measure or observe and
that changes from one case to another. Variables can be demographic (like
age or gender), psychological (like attitudes or motivation), or behavioral
(like performance, aggression, or attendance). But what matters most in
research is how we classify variables based on their function in the study.

The first and most fundamental type is the independent variable. This is the
variable that the researcher believes is causing or influencing something
else. In experiments, it is the variable you manipulate. In our wine example,
daily red wine consumption is the independent variable—you’re changing it
to see what effect it has.

Next is the dependent variable, which is the outcome or effect. It is the
thing you’re measuring to see if it changes as a result of the independent
variable. In this example, the dependent variable would be heart health (as
measured by heart attack incidence, stroke risk, cholesterol levels, etc.).

In survey studies, we use slightly different language. Because we’re not
manipulating anything, we talk instead about predictor variables and
outcome variables. A predictor variable is like the independent
variable—it’s a variable you think might explain or predict something else,
but you're not manipulating it. For instance, if you just survey people on how
much wine they drink and compare it to their health records, “wine
consumption” is a predictor variable, and “heart attack risk” is an outcome
variable (also called a criterion variable).

But it doesn’t end there. There are also two other kinds of variables that help
researchers understand complex relationships:

● Mediating variables (or intervening variables) are those that explain
the relationship between the independent and dependent variable. Think
of them as middle variables—they carry the effect from X to Y. In the
red wine example, it could be that red wine increases polyphenol levels,
and those polyphenols, not the wine itself, are what reduce heart
disease. In that case, polyphenol levels are a mediator.

● Moderating variables are different. They don’t lie in between—they
affect the strength or direction of the relationship between X and Y.
Let’s say red wine reduces heart attack risk more for men than for
women. In this case, gender is a moderator—it changes how strong
the X-Y relationship is.

Understanding these types of variables is crucial for designing your study and
choosing the right theory. If your theory says X causes Y through M, then M
is a mediator. If your theory says X causes Y but only under certain
conditions, those conditions are moderators. Identifying these kinds of
variables helps you make more nuanced, accurate, and useful models.
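These distinctions translate directly into how a statistical model is specified. A minimal sketch with invented data, assuming pandas and statsmodels: a moderator enters the model as an interaction term (testing mediation requires extra steps, so only moderation is shown here).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

df = pd.DataFrame({
    "wine": rng.normal(size=n),          # X: predictor
    "male": rng.integers(0, 2, size=n),  # proposed moderator (1 = male)
})
# Invented effect: wine helps, and helps men more (the moderation).
df["heart_health"] = (0.3 * df["wine"]
                      + 0.4 * df["wine"] * df["male"]
                      + rng.normal(size=n))

# "wine * male" expands to wine + male + wine:male (the interaction term).
model = smf.ols("heart_health ~ wine * male", data=df).fit()
print(model.params)  # a sizable wine:male coefficient signals moderation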

Hypotheses and the Role of Theory


Now we come to the bridge between variables and theory: hypotheses. A
hypothesis is a testable prediction about the relationship between variables.
In a strong quantitative study, you don’t just pick variables at random—you
choose variables because theory tells you they are related. That theory
becomes the foundation for your hypotheses.

For example, let’s say you’re studying leadership in organizations. A theory
might tell you that the more centralized the leadership, the more
disempowered the followers feel. So you create a hypothesis: “The greater
the centralization of power, the greater the disenfranchisement of members.”
That hypothesis, if supported by data, helps confirm the theory. If not, the
theory may need to be revised.

Here’s the key: theory is not just an idea—it’s a logical, structured system
of propositions that explain how and why variables are connected. And
over time, as hypotheses are tested and retested, these theoretical systems
become stronger, more refined, and more reliable.

DEFINITION OF A THEORY IN QUANTITATIVE RESEARCH

What a Theory Really Is—and Why It Matters

To build a strong quantitative study, you don’t just need a list of variables.
You need to explain why you believe these variables are related, and why
you think your independent variable will influence your dependent one. That
explanation comes from theory. But what exactly is a theory?

The chapter uses Kerlinger’s (1979) classic definition to ground this idea: a
theory is “a set of interrelated constructs (variables), definitions, and
propositions that presents a systematic view of phenomena by specifying
relations among variables, with the purpose of explaining natural
phenomena.” In other words, a theory is like a carefully built framework of
variables and ideas that describe how things work in the world and why they
work that way.

Let’s break this down. A theory is:

● A set of constructs or variables (these are the measurable parts of
your study);

● It includes definitions (clear meanings of the variables);

● It contains propositions (statements about how those variables relate to
each other);

● And it is systematic (structured, logical, organized);

● Its purpose is to explain or predict real-world phenomena.

For instance, imagine a theory in education that explains why some students
are more motivated than others. The theory might say that autonomy,
competence, and relatedness are psychological needs, and that satisfying
these needs leads to higher motivation. This is not just a story—it’s a theory
with named variables, relationships among them, and an explanation for an
outcome. That’s what makes it testable.

The theory might appear in your study in many forms—it could be a section
in your literature review, a visual diagram, a set of hypotheses, or a separate
chapter titled Theoretical Framework. But in every case, its purpose is to tie
the variables together and justify why you expect a certain pattern or result.

The authors offer a useful metaphor: the theory is like a rainbow. It
connects the independent and dependent variables. Just like a rainbow arches
over two points and connects them in a visible arc, a theory provides the logic
that connects X and Y. It tells the reader: here’s why I believe X affects Y.

For example, suppose you're studying whether social isolation increases
anxiety in university students. Your theory might say that humans have a
fundamental need for social belonging (a proposition from Maslow’s
hierarchy of needs). If that need is unmet, psychological stress increases.
This theory justifies your choice of variables, the direction of the hypothesis
(more isolation = more anxiety), and the relevance of your research question.

But theories don’t just appear overnight—they are developed over time
through repeated testing. Researchers generate hypotheses (such as “high
isolation leads to high anxiety”), test them in different settings (colleges,
workplaces, different age groups), and gradually build a body of evidence.
When these results converge, the theory gains credibility. Eventually,
someone names it, publishes it, and others use it in new contexts. That’s how
theory becomes part of the field’s common knowledge.

Theories also vary in scope and level. Some are very narrow, explaining
only small patterns in specific situations—these are called micro-level
theories. For instance, Goffman’s theory of “face work” is a micro theory
that explains behavior in face-to-face interactions. Some theories are
broader—meso-level theories—that apply to communities, institutions, or
organizations. Others, like macro-level theories, apply to entire societies or
systems. For example, Lenski’s theory of social stratification explains how
surplus production affects the organization of society. Knowing the scope of
your theory helps you judge whether it’s appropriate for your research focus.

So, to summarize this section in one deep insight: a quantitative theory is
not just background information—it is an organized, testable structure that
connects your variables and gives your study intellectual coherence. Without
it, your study is like a house without a foundation—it might look complete,
but it won’t stand up to critical scrutiny.

FORMS OF THEORIES IN QUANTITATIVE RESEARCH

The first point made here is that researchers don’t always express theory the
same way. Depending on the tradition they come from, the kind of research
they’re doing, and their own style, they might write theory as a network of
hypotheses, as a narrative argument using cause-effect logic, or as a
diagram or visual model. All three are legitimate, and in many studies,
researchers combine them for clarity.

1. Theory as Interconnected Hypotheses

In this form, the researcher expresses the theory as a list of
hypotheses—specific predictions about relationships between variables.
These hypotheses are not made up on the spot—they are drawn from the
logic of an existing theory and reflect the theory’s assumptions about how
variables relate to one another.

The chapter gives an example from Hopkins (1964), who articulated a theory
of influence processes using 15 separate hypotheses. The structure is
elegant and tight—each hypothesis builds logically on the others, and the
entire collection becomes a system of interrelated propositions.

Let’s look at a few examples from that set:

● “The higher one’s rank, the greater one’s centrality.”

● “The greater one’s centrality, the greater one’s observability.”

● “The higher one’s rank, the greater one’s conformity.”

● “The greater one’s observability, the greater one’s conformity.”


Each of these is a clear, directional statement that links two variables (e.g.,
rank and centrality, centrality and conformity). Together, they tell a story:
people in high-ranking positions become more central, more visible, and
therefore conform more. That’s a theory, written entirely through hypotheses.
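The “network” quality of this form is easy to see if each hypothesis is written as a directed link. A Python sketch using only the four example hypotheses quoted above:

# "The higher A, the greater B" becomes a directed edge A -> B.
hypotheses = [
    ("rank", "centrality"),
    ("centrality", "observability"),
    ("rank", "conformity"),
    ("observability", "conformity"),
]

for cause, effect in hypotheses:
    print(f"{cause} -> {effect}")

# Variables that are never an effect are the theory's starting points.
causes = {c for c, _ in hypotheses}
effects = {e for _, e in hypotheses}
print("origin variable(s):", causes - effects)  # {'rank'}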
This format is especially useful when the study has multiple variables and
the researcher wants to show the network of relationships among them. It’s
also common in psychological or organizational research, where variables
are often tightly defined and interdependent.

This form is very efficient and makes the theory easy to test. However, it can
be dense and technical, especially for readers who aren’t experts. That’s
why some researchers supplement it with narrative explanations or
diagrams.

2. Theory as If–Then Logical Statements

This second form expresses theory as conditional logic: If X happens, then Y
will follow. It is one of the clearest ways to express causal reasoning, and
it’s grounded in positivist scientific writing.

The chapter draws from Homans (1950), a major figure in sociological
theory, who wrote:

“If the frequency of interaction between two or more persons
increases, the degree of their liking for one another will increase,
and vice versa…”

This kind of statement is powerful because it not only tells us that two
variables are related, but how the relationship works. The logic is
symmetrical, reversible, and testable. It also opens the door to mediating
and moderating conditions—for example, we could ask: “Does this hold true
only in certain cultural settings?” or “Is there a point where more interaction
reduces liking?”

What’s important here is that the if-then format lays out the underlying
assumptions of the theory clearly. It also helps you translate theory into
hypotheses, because hypotheses are often just specific if-then statements
with measurable variables attached.

For example, a theory might state:

If students receive timely, constructive feedback, then their writing
performance will improve.
From that, we derive the hypothesis:
Students who receive feedback within 48 hours will score higher
on writing assessments than those who receive delayed feedback.

So, the theory gives us the logic, and the hypothesis gives us the testable
prediction.

3. Theory as a Visual Model

This third form of theory is visual—the researcher creates a diagram that
shows how variables are connected. This is especially helpful in studies with
many variables, or where the relationships are complex (e.g., with mediators
or moderators). The visual model allows the reader to see the architecture of
the theory: where the variables are, how they flow, and what influences what.

The chapter mentions Blalock (1969, 1985, 1991) as an advocate of causal
modeling, where verbal theories are transformed into path diagrams. These
models are built with rules:

● Independent variables go on the left.

● Dependent variables go on the right.

● Arrows show the direction of influence.

● Valence signs (+ or –) indicate whether the relationship is positive or negative.

● Double-headed arrows show correlations that are not part of the causal pathway.

This is the foundation of what we now call structural equation modeling
(SEM)—a highly advanced form of statistical modeling.

The chapter presents an example (Figure 3.1) in which three independent
variables influence one dependent variable, with two mediating variables
in between. This setup shows a classic causal chain: the independent
variables affect the mediators, which in turn affect the outcome. It’s
essentially a theory laid out like a flowchart, and it’s ideal for studies that aim
to test complex causal relationships using regression analysis, path
analysis, or SEM.
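The drawing rules above can also be captured as plain data, which is essentially how path-analysis and SEM software receives a model. A sketch shaped like the Figure 3.1 description follows; the variable names are invented, and a real analysis would estimate the paths with regression or SEM tools.

# Directed, signed paths: (cause, effect, expected sign).
paths = [
    ("teacher_feedback", "perceived_support", "+"),   # IV -> mediator
    ("class_size", "perceived_support", "-"),         # IV -> mediator
    ("prior_achievement", "self_efficacy", "+"),      # IV -> mediator
    ("perceived_support", "motivation", "+"),         # mediator -> DV
    ("self_efficacy", "motivation", "+"),             # mediator -> DV
]

causes = {c for c, _, _ in paths}
effects = {e for _, e, _ in paths}
print("independent variables:", sorted(causes - effects))
print("dependent variable(s):", sorted(effects - causes))
for cause, effect, sign in paths:
    print(f"{cause} --({sign})--> {effect}")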

Another simpler version (Figure 3.2) shows a between-groups experimental
design: two groups (experimental and control) are compared on one outcome.
The diagram illustrates how treatment (X) affects outcome (Y). Even this
simple model helps clarify the logic of the design.

The most complex diagram (Figure 3.3) is taken from Jungnickel’s
dissertation on faculty research productivity. He built a model that includes
multiple independent, mediating, and dependent variables, organized
from left to right, with plus and minus signs indicating expected
relationships. It’s a real-world example of how theory guides a full research
design.

What these diagrams make visible is the structure of the theory. You can
literally see how the independent variables are expected to work, how
mediators function, and how the dependent variable is supposed to change.
This helps not only the reader, but the researcher—because drawing a
diagram forces you to clarify your thinking.

PLACEMENT OF QUANTITATIVE THEORIES

Where Theory Belongs in a Quantitative Study

When you use theory in a quantitative study, you’re not just including it for
decoration—you’re using it deductively. That means the entire logic of your
research flows from the theory. You begin by stating a theory, then you
develop hypotheses or research questions from it, and then you collect and
analyze data to test those predictions. So where should that theory be
placed?

The general rule is: place the theory early in your research plan. That means
it should appear before you present your hypotheses or research questions,
because the theory is what justifies and explains why you are asking those
questions. Some researchers place it in the introduction, others in the
literature review, and some in a separate section titled "Theoretical
Framework" or "Theoretical Perspective." The chapter recommends this last
option—using a clearly labeled, stand-alone section—because it gives the
reader a chance to clearly distinguish the theory from the background
literature, variables, or hypotheses. In formal academic proposals, this makes
your theoretical thinking transparent and focused.

Now, think of this in terms of structure. You might write your introduction by
first describing the problem, followed by a brief review of relevant literature,
and then immediately after that, introduce your theoretical perspective. This
becomes your organizing framework—it tells your reader what to expect,
what variables are involved, and why you believe these variables are
connected. The theory becomes the backbone of your argument, helping
you transition smoothly into hypotheses, variable definitions, and
measurement tools.

To visualize this process, the chapter presents Figure 3.4, which outlines the
deductive approach. It starts with a theory, breaks that theory down into
hypotheses or research questions, defines the variables, then finds or builds
instruments to measure those variables, and finally collects and analyzes data
to confirm or refute the theory. This flow is linear, logical, and testable—the
defining characteristic of quantitative research.

In summary: In quantitative studies, you use theory to build a case. And you
must state it clearly, early, and explicitly—either in the introduction, right
after the research questions, or in a standalone theoretical section. That theory
sets up everything that follows.

WRITING A QUANTITATIVE THEORETICAL PERSPECTIVE

How to Write the Theory Section in a Proposal

Now that we know where the theory goes, how do you actually write it?

The chapter provides a structured model for writing a theory section in a
research proposal. It even offers exact sentence prompts you can use and
adapt to your own topic. The goal is to make sure that you not only name the
theory, but also explain its origin, identify its core hypotheses, show
where it's been used, and connect it clearly to your study.

Here’s the full recommended structure:

1. Identify the Theory: Start by naming the theory you will use. For
example: “The theory that I will use is social learning theory.”

2. Give the Origin: Explain where the theory comes from, who
developed it, and in what context. This shows you’ve done your
background reading. Example: “It was developed by Albert Bandura
and used to study human learning in educational and clinical contexts.”

3. Summarize Key Propositions: State the theory’s central claims. What
does the theory say? What are its major hypotheses or principles? This
part helps the reader understand the theory’s inner logic.

4. Connect the Theory to Your Variables: This is where you apply the
theory to your own study. You must show how the theory justifies the
relationship between your independent and dependent variables. This is
the "rainbow bridge"—how X explains or predicts Y.

5. Justify the Logic: Explain why you expect your independent
variable(s) to influence your dependent variable(s), according to the
theory. What mechanism does the theory suggest?

Here is the full template script from the chapter:

“The theory that I will use is [name the theory]. It was developed
by [origin/source], and it was used to study [topic]. This theory
indicates that [propositions/hypotheses]. As applied to my study,
this theory holds that I would expect my independent variable(s) [X
variables] to influence or explain the dependent variable(s) [Y
variables] because [rationale based on the logic of the theory].”

This script can be adapted to any theory and research question. It forces you
to be clear, disciplined, and precise. It also ensures that your study is
anchored in the larger academic conversation.

The chapter then provides an example from a real dissertation written by
Crutchfield (1986). Her study looked at whether locus of control and
interpersonal trust affected scholarly productivity among nursing faculty.
She used social learning theory as her theoretical framework. In her writing,
she:

● Named the theory and gave its background (Bandura, Rotter, etc.)

● Summarized its main propositions (behavior depends on reinforcement,
expectancy, and context)

● Defined her variables within the theory’s logic (locus of control as
expectancy; rewards as reinforcements; the university as the
psychological context)

● Applied the theory to her study in the form of an if-then logic: If
faculty believe their efforts will be rewarded, and if they trust others
and see the rewards as valuable, then they will produce more scholarly
work.

This example shows how a well-written theory section becomes the
intellectual anchor of your study. It guides your hypotheses, your variable
selection, your measurement tools, and your interpretation of results. Without
it, your study would drift without direction.

THEORY IN QUALITATIVE RESEARCH

The Many Roles of Theory in Qualitative Inquiry

Unlike in quantitative research—where theory is typically fixed, named, and
placed at the beginning to be tested—qualitative research exhibits a diversity
of approaches to theory. Theory may appear as:
● A broad explanatory framework that shapes the questions and
methods;

● A theoretical lens or perspective that reflects the researcher’s critical
orientation;

● An inductively generated result that emerges from analyzing data;

● Or in some cases, no formal theory at all, especially when the goal is
to produce rich description rather than generalization.

So, in qualitative studies, theory can be a starting point, a guiding
framework, a developing tool, or an end product—depending on the
researcher’s purpose, worldview, and methodology.

First Role: Theory as Broad Explanation (Cultural Themes)

The first way theory enters qualitative research is similar to its use in
quantitative work: as a broad explanation that helps the researcher
understand behavior or structure the study. For example, ethnographers,
who study cultures and communities in their natural settings, often use
cultural theories to frame their inquiries. These theories might focus on
social organization, family structures, language practices, gender roles,
rituals, or systems of power.

The chapter gives the example of health science research, where researchers
often begin with conceptual models related to health behavior—like the
theory of planned behavior, health belief model, or quality of life
frameworks. These are not hypotheses to be tested as in quantitative work,
but orienting concepts that guide what the researcher observes, asks, and
analyzes.

So in this first model, theory provides a ready-made set of ideas or
patterns—a sort of intellectual map—that researchers use to explore
complex cultural or social phenomena. While these might not always be
labeled “theories” in the formal sense, they serve that role.

Second Role: Theory as a Transformative Lens

Here we come to one of the most powerful and politically engaged uses of
theory: the theoretical lens or transformative perspective. This approach
gained strength in the 1980s and 1990s when researchers from feminist,
critical race, queer, disability, and decolonial traditions began using
qualitative research to challenge power structures and give voice to
marginalized groups.

In this model, theory doesn’t just explain the world—it questions it,
interrogates it, and aims to transform it. Researchers who use these lenses
are not “neutral observers.” They position themselves as active, engaged
scholars who are asking political and ethical questions: Who has power?
Who gets to speak? Who has been silenced? What systems sustain
inequality?

Each lens has its focus:

● Feminist theory examines how women’s lives are shaped by structural inequality, patriarchy, and cultural representations.

● Critical race theory interrogates how race, racism, and colonial legacies shape knowledge and experience.

● Queer theory challenges normative definitions of gender and sexuality and centers the experiences of LGBTQ+ individuals.

● Disability studies reject the idea that disability is a medical defect and instead explore how society disables people through inaccessible structures and discriminatory attitudes.

When researchers adopt one of these lenses, it affects every part of their
study: the questions they ask, how they collect data (often collaboratively),
how they analyze it (often through narratives, stories, or themes), and how
they write their results (often alongside participants, rather than about
them). The goal is not just understanding, but empowerment and social
change.

As Rossman and Rallis (2012) argue, critical and postmodern perspectives
fundamentally challenge the assumptions of traditional science: the idea that
knowledge is objective, that the researcher is neutral, or that there is one
truth. Instead, they insist that research is always shaped by power, culture,
gender, race, and politics. Therefore, theory becomes not just a guide, but a
stance—a way of taking responsibility for what research does in the world.

Third Role: Theory as an Outcome (Inductive Theory-Building)

In contrast to the first two approaches—where theory comes at the beginning—this third model places theory at the end of the study. This is
typical of grounded theory, but also appears in case studies, naturalistic
inquiry, and ethnographic narratives.

Here, the researcher begins with no fixed theory, enters the field with
open-ended questions, collects data, and gradually builds themes, categories,
and patterns. From these, a theory is developed. This is the inductive logic
of qualitative research: from data to patterns to generalizations.

Lincoln and Guba (1985) called the result a “pattern theory”—not a universal law, but a plausible and grounded explanation of how a social
process works. For instance, a researcher might study how first-generation
university students adapt to academic life. After months of interviews,
coding, and comparison, the researcher may develop a theory of
“navigational capital” that explains how students use family, peer, and
cultural resources to succeed. That theory didn’t exist before the study—it
emerged from the data.

Stake (1995) calls the final product an assertion or naturalistic generalization—something that readers can apply to their own contexts
through resonance, not statistical inference.

Fourth Role: Studies With No Formal Theory

Finally, there are qualitative studies that do not use any explicit theory at all.
This doesn’t mean they are “atheoretical”—every researcher comes with
assumptions—but in these studies, the goal is often to provide a rich,
textured, descriptive account of a phenomenon without framing it in
theoretical terms.

This is especially true in phenomenology, where researchers aim to describe the essence of lived experience—what it’s like to experience grief,
motherhood, trauma, or forgiveness. In these cases, the researcher may
bracket out their assumptions, collect deep, open-ended descriptions, and
produce a descriptive narrative that evokes understanding rather than builds
theory.

But as Schwandt (2014) notes, no research is ever theory-free. Even choosing to describe rather than explain is a theoretical stance. Still, in these studies,
theory is subtle, backgrounded, or postponed, rather than foregrounded.

THEORY USE IN MIXED METHODS RESEARCH

The Dual Role of Theory: Deductive and Inductive in One Study


Mixed methods research is not just about using two types of data—it is about
integrating them in a way that produces insights greater than what either
could provide alone. Because of this, theory in mixed methods often plays a
dual role: it may operate deductively, guiding quantitative hypotheses, while
also functioning inductively, emerging through qualitative data analysis.
Theory in mixed methods therefore has to be flexible, layered, and robust
enough to accommodate different forms of knowledge production.

The chapter identifies two dominant frameworks through which theory often appears in mixed methods studies:

1. The Social Science Framework

2. The Participatory–Social Justice Framework

These are not “types” of theory in the abstract; they are ways of embedding
theory throughout the design, data collection, analysis, and interpretation
stages of a study. Let’s unpack each one in full.

THE SOCIAL SCIENCE FRAMEWORK IN MIXED METHODS

A social science theory—whether from psychology, economics, sociology, education, or another field—can serve as the overarching explanatory
model in a mixed methods study. It provides structure, consistency, and
clarity across both quantitative and qualitative components. This theory
might appear as:

● A named theoretical model (e.g., Bandura’s social cognitive theory, Ajzen’s theory of planned behavior),

● A conceptual framework drawn from prior research,

● Or a set of propositions that guide what you expect to find.

In a well-constructed study, the theory appears early, clearly stated in the introduction or literature review. It lays out the major variables, their
expected relationships, and their theoretical justification. This helps frame
both your quantitative hypotheses and your qualitative questions.

For example, Creswell and Plano Clark (2011) describe how theory in this
model should:

● Be stated explicitly by name;

● Be connected to prior studies that used it in similar contexts;

● Include a diagram or figure illustrating key variables and relationships;

● Guide both types of data collection, even if it plays a stronger role in one phase;

● Be revisited at the end, to interpret findings in light of the theory and compare your results with previous studies.

One example discussed in the chapter is Kennett et al. (2008), who studied
chronic pain management using Rosenbaum’s model of learned
resourcefulness. This is a cognitive-behavioral theory that explains how
individuals cope with stress and health challenges. The researchers used a
quantitative instrument (Rosenbaum’s Self-Control Schedule) to measure
resourcefulness and also conducted qualitative interviews to explore how
patients experience pain management. The theory appeared at the beginning
of the article and was used to generate questions, analyze patterns, and
explain findings across both methods. The authors also proposed a visual
model at the end of the study showing which aspects of the theory were
supported.

This approach illustrates the power of theory as a bridge between numerical analysis and lived experience. It brings coherence to the study, anchors it in
existing knowledge, and provides testable predictions as well as contextual
interpretation.

THE PARTICIPATORY–SOCIAL JUSTICE FRAMEWORK IN MIXED METHODS

The second major way theory is used in mixed methods is through participatory–social justice frameworks. This is an approach rooted not
only in methodology, but in ethics, activism, and community engagement.
The key advocate of this approach is Donna Mertens, whose work since the
early 2000s has redefined how researchers think about inclusion, power, and
transformation in research design.

At its core, this framework argues that research must:

● Be grounded in ethical stances that prioritize inclusion, equity, and the dismantling of oppressive structures;

● Enter communities with respect, trust, and transparency;

● Produce findings that are useful to the community, not just academia;

● And collaborate with participants as partners, not objects of study.

This perspective aligns closely with critical theory, feminist theory, indigenous methodologies, and anti-colonial frameworks. It asks the
researcher to take a stand—to recognize that knowledge is not neutral, that
methods are not apolitical, and that research has consequences.

In mixed methods studies, this framework can shape every stage:

● In the problem statement, it asks: Does this problem emerge from the
community itself?

● In the literature review, it asks: Are voices of oppressed groups represented?

● In the research questions, it asks: Are we asking questions that empower rather than pathologize?

● In sampling, it asks: Are we including diverse, marginalized, and underrepresented groups?

● In data collection, it asks: Are participants treated as collaborators?

● In analysis, it asks: Are we exploring power dynamics? Are we sharing our interpretations with the community?

● In interpretation and dissemination, it asks: Are we facilitating real change with these findings?

Mertens (2003, 2009, 2010) lays out specific criteria for how to embed this
framework into mixed methods research. These include ten reflective
questions, such as:

● Did the study benefit the community?

● Did the participants help design the study?

● Did the researchers address oppression, diversity, and inclusion?

● Were the findings presented in ways that challenge inequality and advocate for change?

The chapter includes Box 3.1, which operationalizes these criteria in relation
to every phase of the research process—from defining the problem to
collecting data to reporting results.

In their review of studies using this approach, Sweetman et al. (2010) found
that while many researchers adopted elements of the transformative
paradigm, few made it explicit. This suggests that the field is still developing
its standards, and that researchers interested in using this framework need to
be especially deliberate and reflective.

To see this approach in action, the chapter presents Hodgkin’s (2008) study,
which used a feminist lens in a mixed methods study of social capital in
Australia. Hodgkin highlighted the absence of gendered perspectives in
traditional studies of social capital, and designed her study to give voice to
women’s experiences. Her research questions were guided by feminist
theory; her quantitative and qualitative findings were integrated through that
lens; and her interpretations focused on women’s social participation,
isolation, and identity. By doing so, Hodgkin demonstrated that theory isn’t
just a conceptual tool—it is a political and ethical stance.

CONCLUSION: Theory Across All Three Approaches

The chapter ends by tying together everything we’ve learned. Whether you
are conducting quantitative, qualitative, or mixed methods research, theory
plays an indispensable role. It can:
● Explain or predict relationships between variables;

● Guide the formulation of hypotheses;

● Offer a lens for seeing the world through issues of power and identity;

● Emerge from the data through inductive reasoning;

● And help researchers take responsibility for the social impact of their
work.

In quantitative research, theory is tested. It is formal, structured, and deductive.

In qualitative research, theory is often used flexibly. It may guide inquiry, shape analysis, or emerge from patterns in the data.

In mixed methods research, theory can do both. It may appear as a traditional social science model or as a participatory framework that challenges injustice.

The skill of the researcher lies in understanding which form of theory is appropriate, how to express it clearly, and how to ensure that every part of the research aligns with it.

CHAPTER 5 — THE INTRODUCTION

(From Creswell & Creswell, 2018 – Research Design: Qualitative, Quantitative, and Mixed Methods Approaches)

Understanding the Role of the Introduction: Why It Matters and How to Write It

This chapter begins by positioning the introduction of a research proposal as
a critical gateway into the study. It’s not just about grabbing attention like in
creative writing. In research, the introduction serves a very specific academic
function: to logically justify the need for the study, using a blend of formal
structure, scholarly positioning, and strategic argumentation. The introduction
must persuade your reader—often a supervisor, committee, journal editor, or
funding board—that your study is necessary, timely, grounded in literature,
and feasible within a scholarly framework.

Creswell and Creswell explain that writing a good introduction requires carefully addressing five interrelated components, each of which builds
toward the next. These five are:

1. The problem (or need) for the study,

2. The review of literature showing what's known and unknown,

3. The deficiencies in current knowledge,

4. The significance or rationale for why the study should be done,

5. And a final overview that outlines the intent of the study.

But before even getting to those elements, the authors make one thing clear:
the introduction must clearly identify the research problem early on. This
is not an abstract idea or a topic—it is a specific issue, concern, controversy,
or gap in understanding that the study aims to address.

Framing the Problem: From General Topic to Researchable Issue

According to Chapter 5, many students mistakenly begin with a broad topic and assume that naming the topic is enough to justify a study. But Creswell
and Creswell stress that the introduction must move from the general to the
specific, guiding the reader from a wide area of interest to a clear,
researchable problem. For example, “student motivation” is too vague. But
“the lack of motivation among second-year Moroccan high school students in
rural areas and how it impacts EFL classroom engagement” is moving toward
a focused research problem.

The authors advise that the first paragraph of a good introduction should
engage the reader by showing that the topic is important and timely, and
that it touches a real issue in education, psychology, health, or whatever your
field is. This part is often referred to as the “hook”, not in the sense of catchy
writing, but as a statement of urgency or need. Why should anyone care
about this study? What’s at stake if the problem goes unstudied?

They also suggest citing statistics, real-world examples, policy documents, or personal experience as entry points into the topic. But this must lead
smoothly into stating the research problem, which serves as the foundation
for the entire proposal.

The Problem Statement: Defining What is Not Known

In the next key part of the introduction, the researcher defines the problem
that the study seeks to investigate. This is not simply an area of interest or a
topic the researcher likes—it is a deficiency in what is currently known in the
field.

Creswell and Creswell refer here to the work of social scientists like Babbie
(1990) and others who emphasized that the research problem must meet
several criteria:

● It must be researchable (answerable using data),

● It must be significant (its answer should matter to the field),

● It must be manageable (doable within time and resource limits),

● And it must be grounded in prior literature.

To build this, the researcher must synthesize what the literature already says.
What do we know about this issue? What have past studies found? Where do
they agree or conflict? But more importantly: what is still missing? This is
the core of the problem statement.

An effective problem statement might say something like:

“Although several studies have explored the general impact of teacher feedback on student motivation, few have examined how this dynamic operates in Moroccan rural public schools, especially within the context of second-language learning.”

Here, we’re not just naming a topic—we are identifying a specific gap. And
that gap becomes the reason the study exists.

The Deficiencies Model of an Introduction

This is where Creswell and Creswell introduce their signature structure, which they call the Deficiencies Model. This model outlines a five-part
framework for writing research introductions that are logically coherent and
academically persuasive. Each part of this model serves a strategic purpose:

1. Topic and opening general introduction: This is the hook—the starting point that shows the topic is interesting or important.

2. Review of literature and discussion of what’s known: This is the synthesis of previous research that lays the groundwork.

3. Deficiencies in the literature: This is where you explain the gaps, controversies, or under-researched areas.

4. Significance of the study: Here, you argue why your research matters—who will benefit, what change it might make, how it adds to theory or practice.

5. Purpose statement: The conclusion of the introduction, which transitions into the more technical heart of your proposal.

Each of these pieces builds toward the purpose. You don’t start with your
research questions. You start by justifying the need for your questions. This
is what the introduction is for.

The Significance of the Study: Making the Case

One of the most misunderstood sections of an introduction is the significance paragraph. This is where researchers must articulate the value of their
work, not in vague terms like “this study is important,” but by specifying
who benefits and how.

Creswell and Creswell explain that this section should name audiences
explicitly: educators, policymakers, scholars, practitioners, students,
institutions, etc. It should also explain how the findings might be used: to
improve practices, inform policy, advance theory, or open up new lines of
research.

For example:
“The findings of this study could provide teachers and curriculum
designers with culturally sensitive strategies to boost EFL
motivation in rural Moroccan classrooms, thereby addressing
ongoing disparities in language learning outcomes.”

This tells the reader: this isn’t just academic busywork. This study has
real-world impact.

Introducing the Purpose Statement

The final part of the introduction is where the researcher introduces the
purpose statement—which becomes the subject of Chapter 6 (and which
we already studied in detail). But here, in Chapter 5, Creswell and Creswell
emphasize that the transition to the purpose statement must be natural and
coherent. It must emerge logically from the previous discussion of the
problem, literature, deficiencies, and significance.

In other words: you don’t just dump a purpose statement on the reader out of
nowhere. You build up to it, layer by layer, so that when the reader arrives at
the sentence beginning with “The purpose of this study is…,” they’re already
primed to understand what’s being proposed and why it matters.

The Purpose Statement in Qualitative Research

In qualitative research, the purpose statement centers on a central phenomenon—a lived experience, a concept, a process, or a meaning system
that the study seeks to understand or interpret. Qualitative studies are
typically emergent, inductive, and exploratory, so the purpose statement
needs to reflect this openness. It does not claim to “test” anything or
“measure” anything. Instead, it seeks to explore, describe, understand, or
discover something in context.

The chapter recommends that you use neutral, non-directional language,
avoiding words like “affect,” “influence,” “impact,” or “cause,” which
suggest a quantitative logic. Instead, you use verbs like explore, understand,
describe, develop, or discover.

In addition, a good qualitative purpose statement names:

● The strategy of inquiry (e.g., phenomenology, case study, ethnography, grounded theory),

● The central phenomenon under investigation,

● The participants (or group being studied),

● And the research site (the setting or context of the study).

Let’s consider an example that brings all of this together:

“The purpose of this qualitative phenomenological study is to explore the lived experiences of female first-generation university students in Morocco as they navigate cultural expectations and academic pressure during their undergraduate years.”

Notice how this statement:

● Clearly names the strategy (phenomenology),

● Uses explore (a neutral, inductive verb),

● Specifies the participants (female first-gen students),


● Describes the phenomenon (their experiences with cultural and
academic tension),

● And names the setting (Morocco, implicitly the universities).

This statement tells the reader exactly what kind of study is being done,
and it reflects the philosophical commitments of qualitative research:
subjectivity, context, complexity, and meaning.

The Purpose Statement in Quantitative Research

Quantitative research operates differently. Its philosophical foundation is postpositivist, its logic is deductive, and its goal is to test theories or
examine relationships among variables. The purpose statement in a
quantitative study, therefore, is more structured, precise, and formal. It needs
to identify:

● The independent variable(s) (the presumed cause),

● The dependent variable(s) (the outcome),

● Any control, moderating, or mediating variables (if relevant),

● The participants and setting,

● And the design (e.g., experiment, survey, causal-comparative, correlational).

Moreover, unlike qualitative research, quantitative purpose statements do make directional predictions. You are encouraged to use language such as
“compare,” “relate,” “influence,” or “impact,” because your goal is to test a
hypothesis or to confirm a theoretical assumption.

Here’s a textbook-quality example:

“The purpose of this quantitative correlational study is to examine the relationship between students’ perceived teacher feedback and their academic motivation scores among third-year secondary students in Rabat, Morocco.”

This sentence shows:

● That the study is correlational (so, not testing causality),

● That the IV is “perceived teacher feedback,” the DV is “academic motivation,”

● That the participants are “third-year secondary students,”

● And that the setting is Rabat.

This purpose statement sets the stage for hypotheses, variables, instruments,
and statistical analysis. It reflects predictive logic and the values of
measurement, control, and generalization that define quantitative research.

The Purpose Statement in Mixed Methods Research

Writing a mixed methods purpose statement is more demanding because it has to integrate two research logics (quantitative and qualitative), reflect
their timing, their weight, and the rationale for combining them. It’s not just
about listing both data types—it’s about explaining how and why you are
mixing them to achieve a richer understanding of the problem.
A strong mixed methods purpose statement includes:

● The type of design (e.g., convergent, explanatory sequential, exploratory sequential),

● The quantitative variables and their role (e.g., predicting or explaining),

● The qualitative phenomenon and its role (e.g., exploring experiences, deepening understanding),

● The priority or weight of each method (equal, or dominant/minor),

● The reason for mixing (e.g., to triangulate findings, to build from one
phase to another),

● And the participants and setting.

A well-constructed mixed methods purpose statement might look like this:

“The purpose of this convergent mixed methods study is to compare quantitative survey results on student engagement with qualitative interview findings on classroom motivation strategies among EFL high school teachers in northern Morocco. The study aims to triangulate data sources to provide a deeper understanding of teacher practices and student outcomes.”

This statement is doing a lot:

● It shows a convergent design (both types of data collected and analyzed in parallel),

● Names both types of data (survey and interviews),

● Connects them around a central phenomenon (engagement and motivation),

● Names the participants and context (EFL teachers, Moroccan high schools),

● And states the purpose (triangulation for deeper understanding).

This is not a random blending—it’s a methodologically integrated approach that depends on good design thinking.

WRITING THE PURPOSE STATEMENT

Language, Structure, and Examples for Each Research Approach

The authors begin this section by emphasizing that the language you use in
the purpose statement must match the logic and assumptions of your
chosen research approach. This might seem obvious, but it’s where many
student researchers go wrong. If you’re conducting qualitative research but
use words like “impact” or “effect,” your language is betraying a positivist
assumption. If you’re doing quantitative work but avoid stating your
variables or predicted relationships, you may seem vague or unscientific.

That’s why the chapter provides separate sentence formats—each carefully tailored to reflect the philosophical and methodological commitments of
the approach it serves. Let’s examine each of these in full.

Purpose Statement Template: Qualitative Research


The qualitative format emphasizes understanding, exploration, and
contextual meaning. The core components of this purpose statement
include:

● The intent of the study (using verbs like explore, understand, describe),

● The central phenomenon or concept,

● The strategy of inquiry (such as ethnography, case study, phenomenology),

● The participants (individuals or groups),

● And the setting (where the study takes place).

Here is the suggested sentence frame from the book:

“The purpose of this qualitative [strategy of inquiry] study is to [understand, describe, explore] the [central phenomenon] of [participants] at [site].”

For example:

“The purpose of this qualitative case study is to explore the leadership practices of female principals in rural secondary schools in southern Morocco.”

Let’s break this down:

● “Qualitative case study” is the strategy of inquiry,


● “Explore” is the neutral, inductive verb,

● “Leadership practices” is the central phenomenon,

● “Female principals” are the participants,

● “Rural secondary schools in southern Morocco” is the setting.

This is a textbook-perfect qualitative purpose statement. It tells the reader exactly what the study is about, without jumping ahead to claims, predictions,
or variables. It respects the open-ended, context-sensitive nature of
qualitative research.

Purpose Statement Template: Quantitative Research

In quantitative research, the purpose statement must name the variables, describe the relationship between them, and declare the design of the study.
It reflects a deductive logic and usually supports the testing of a theory or
hypothesis.

The elements required are:

● The type of study (e.g., correlational, experimental, survey),

● The independent and dependent variables (plus control, moderator, mediator if relevant),

● The participants,

● The setting,
● And optionally, the theory being tested.

Here is the recommended sentence format:

“The purpose of this [type of quantitative study] is to [test the theory of X] by examining the relationship between [independent variable] and [dependent variable] for [participants] at [research site].”

You can also include mediators/moderators like this:

“… controlling for [control variables], and examining how [moderator variable] influences this relationship.”

Example:

“The purpose of this quantitative correlational study is to examine the relationship between teacher feedback and academic motivation among high school students in Casablanca.”

This tells us:

● It’s quantitative and correlational,

● The IV is teacher feedback, the DV is academic motivation,

● The participants are high school students,

● The setting is Casablanca.

This is clear, concise, and reflects a strong alignment with quantitative logic.

Purpose Statement Template: Mixed Methods Research

The mixed methods purpose statement is the most complex because it must
integrate the logic, goals, and procedures of both approaches. The purpose
statement must:

● Name the mixed methods design type (e.g., explanatory sequential, convergent),

● State the intent of the study (why mixing methods adds value),

● Describe the quantitative strand (variables, participants, setting),

● Describe the qualitative strand (phenomenon, participants, setting),

● Explain the mixing logic (e.g., to expand, triangulate, develop instrument),

● And clearly show the sequence or priority of the strands.

The recommended structure looks like this:

“The purpose of this [type of mixed methods] study is to [state overall intent of the study], in which [quantitative method: variables, participants, site] will be used to test [theory], and [qualitative method: central phenomenon, participants, site] will be used to explore [qualitative focus]. The reason for combining both quantitative and qualitative data is to [explain rationale: triangulation, complementarity, development].”

Example:
“The purpose of this explanatory sequential mixed methods study is
to examine the relationship between online feedback and writing
performance among Moroccan university students through a
survey, followed by interviews with students to explore their
perceptions of teacher feedback in more depth. The rationale for
using mixed methods is to explain the survey results with
qualitative insights.”

This is comprehensive. It:

● Names the design (explanatory sequential),

● Starts with a quantitative phase (a survey),

● Follows with a qualitative phase (interviews),

● States the rationale (explanation, deeper understanding),

● And identifies the phenomenon and population (feedback, Moroccan students).

That is exactly what a mixed methods purpose statement should do: reflect
methodological integration and a clear rationale.

Summary: Writing a Purpose Statement

A good purpose statement is:

● A precise, formal declaration of your intent,

● Aligned with your research approach (qual, quant, or mixed),


● Complete in its components (what, who, where, how),

● And logically tied to the paradigm and design you are using.

Think of the purpose statement as a blueprint. Everything else in the proposal (questions, hypotheses, variables, methods, analysis, etc.) must
reflect what you’ve declared in this one core sentence.

It is not a casual statement—it is an intellectual commitment, and it must be handled with care.

QUANTITATIVE RESEARCH QUESTIONS AND HYPOTHESES

In this section of Chapter 7, Creswell shifts from the open, exploratory nature
of qualitative research to the precise, predictive, and testable logic of
quantitative inquiry. Unlike qualitative research questions—which focus on
meanings, experiences, or processes—quantitative research questions are
formulated to measure variables, test relationships, or compare groups.
They must be stated in a way that reflects the deductive, postpositivist
worldview that underpins most quantitative research designs.

The authors begin by explaining that in quantitative research, you can express
your study’s aims in three different but related forms: (1) research
questions, (2) hypotheses, and/or (3) objectives. While these serve
overlapping functions, the focus in this chapter is primarily on questions and
hypotheses, which are the most common format in formal research studies
and dissertations.

Creswell and Creswell define quantitative research questions as interrogative statements—that is, they are written as questions—designed
to inquire about the relationships between measurable variables. These
questions are written in the present tense and are specific, focused, and
structured. For example, one might ask: “What is the relationship between
time spent using language learning apps and vocabulary retention scores
among Moroccan university students?” This question implies two clearly
operationalized variables—app usage and vocabulary test scores—and it
signals a correlational design.

The authors emphasize that in quantitative studies, such questions should always:

● Identify the independent and dependent variables (and, if applicable, moderators, mediators, or control variables),

● Use language that reflects causal or associational intent,

● Be consistent with the theoretical framework and purpose statement already defined in earlier chapters.

Creswell and Creswell also stress that these questions should avoid vague
terms like “effectiveness” or “impact” unless those terms are clearly defined
and measurable. For example, instead of saying, “What is the impact of
online teaching?” you should ask, “What is the difference in test scores
between students who receive online instruction and those who receive
in-person instruction in grammar courses?”

Then, in the same section, the authors introduce quantitative hypotheses as another, and often preferred, form of expressing the researcher’s
expectations. Unlike questions, hypotheses are declarative statements—not
interrogative—and they clearly state what the researcher believes the
expected outcome will be. These are typically based on theoretical
reasoning and are subjected to statistical testing.

There are two types of hypotheses described in this chapter:


1. The Null Hypothesis (H₀): This represents a statement of no
relationship or no difference between variables. It functions as the
default claim that statistical tests are designed to reject. For instance:
“There is no significant relationship between students’ use of language
learning apps and their vocabulary test scores.”

2. The Alternative or Research Hypothesis (H₁): This expresses the predicted outcome based on the theory. For example: “There is a
significant positive relationship between students’ use of language
learning apps and their vocabulary test scores.”

Creswell and Creswell recommend always stating both the null and the
alternative hypotheses, particularly in studies that involve inferential
statistical tests, such as t-tests, ANOVA, or regression analysis. They also
suggest that these statements should mirror the structure and directionality
of your study’s theoretical claims. That means if your theory suggests that
more app usage will increase scores, then your hypothesis should be
directional—that is, specifying positive or negative relationships.

Moreover, the authors stress that your hypotheses must align precisely with
the variables you defined earlier in your purpose statement and theoretical
framework. There should be no surprise variables introduced in your
questions or hypotheses that weren’t already explained. This ensures logical
consistency across your research design.

Chapter 7 includes detailed examples of hypotheses in experimental and correlational designs. For instance, in an experimental design, a directional
hypothesis might look like this: “Students who receive immediate feedback
on essays will demonstrate higher revision scores than those who receive
delayed feedback.” Here, the independent variable is feedback timing
(immediate vs. delayed), and the dependent variable is revision score. The
hypothesis not only names both variables but also specifies the expected
direction of the effect.
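
To see what testing such a directional hypothesis looks like in practice, here is a minimal sketch in Python (not from the chapter; the scores are simulated and the group means invented purely for illustration) using a one-sided independent-samples t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated revision scores for two randomly assigned groups.
immediate = rng.normal(loc=78, scale=10, size=40)  # immediate feedback
delayed = rng.normal(loc=72, scale=10, size=40)    # delayed feedback

# Directional (one-sided) test:
#   H0: mean(immediate) <= mean(delayed)
#   H1: mean(immediate) >  mean(delayed)
t, p = stats.ttest_ind(immediate, delayed, alternative="greater")
print(f"t = {t:.2f}, one-sided p = {p:.4f}")  # reject H0 if p < .05
```

The one-sided alternative mirrors the directional wording of the hypothesis; a nondirectional hypothesis would use the default two-sided test instead.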

In correlational studies, hypotheses often use phrases like “positively related to,” “negatively correlated with,” or “significantly associated with.” These
phrases reflect statistical analysis logic—specifically the use of correlation
coefficients and regression modeling to examine the degree and direction of
association between variables.
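
The same directionality can be expressed for a correlational hypothesis. The sketch below is hypothetical (simulated data and invented variable names) and assumes SciPy 1.9 or later for the alternative argument of pearsonr:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
app_hours = rng.uniform(0, 10, size=80)  # weekly app usage (simulated)
vocab_scores = 50 + 2.5 * app_hours + rng.normal(scale=8, size=80)

# "Positively related to" translates into a one-sided test:
#   H0: r <= 0   vs.   H1: r > 0
r, p = stats.pearsonr(app_hours, vocab_scores, alternative="greater")
print(f"r = {r:.2f}, one-sided p = {p:.4f}")
```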

The chapter also includes guidance on writing multiple hypotheses when your study includes more than one research question or examines several
relationships. Creswell and Creswell advise that each hypothesis should be
numbered, and each should correspond to a specific set of variables,
research questions, and analytic procedures.

What’s critical in this part of Chapter 7 is that Creswell and Creswell are not
just giving language formulas—they are giving you the logic of research
design. They are showing that hypotheses are not guesses; they are logical,
testable claims derived from theory, aligned with your purpose, and made
explicit through variables you can measure.

MIXED METHODS RESEARCH QUESTIONS AND HYPOTHESES

In the final part of Chapter 7, Creswell and Creswell turn their attention to
what is perhaps the most demanding task in question formulation: writing
mixed methods research questions. What makes this task especially
complex is that mixed methods research is not just a combination of
qualitative and quantitative methods—it is a philosophically and
methodologically integrated approach. Therefore, the questions must not
only reflect both traditions—they must also be carefully sequenced, aligned,
and justified within the study’s overall design.
The authors begin by acknowledging that while most researchers are
accustomed to writing separate qualitative and quantitative questions, few
are trained in writing an explicit mixed methods research question. This is
a relatively new but essential skill because it signals that the study is truly
mixed—not just in methods, but in logic, structure, and purpose. A mixed
methods research question helps show how the two strands will be
integrated, what order they will appear in, and why the researcher has
chosen to combine them at all.

Creswell and Creswell present three kinds of questions that should appear in
a well-developed mixed methods study:

1. A qualitative research question, which functions exactly as it would in a qualitative-only study. It focuses on understanding, exploring, or describing a central phenomenon.

2. A quantitative research question or hypothesis, which reflects the standard structure of variable-based, testable inquiry—either a causal relationship, association, or group comparison.

3. A mixed methods question, which explicitly addresses the integration of the two data types—asking how one type of data will be used to explain, support, build on, or complement the other.

This third question is what sets a true mixed methods study apart from a
study that just uses two sets of methods side-by-side. The mixed methods
question reflects the core intent of integration—it captures the study’s
rationale in question form. It tells the reader: How will the quantitative and
qualitative data relate to each other? What will the combination accomplish
that each method alone could not?
Let’s take a closer look at how Creswell and Creswell suggest these mixed
methods questions be written.

First, they emphasize that your design type should guide your question
structure. For example, if you are using an explanatory sequential design
(quantitative followed by qualitative), your mixed methods question might
be:

“To what extent do the qualitative interview findings help explain the quantitative results regarding the relationship between teacher feedback and student motivation?”

This question makes clear that:

● The quantitative results come first,

● The qualitative data are used to interpret or explain them,

● The intent of mixing is explanatory.

In contrast, in a convergent parallel design, where both types of data are collected simultaneously, your mixed methods question might be:

“To what extent do the qualitative interview themes about student engagement converge with the quantitative survey scores of classroom participation?”

Here, the emphasis is on triangulation—comparing the two datasets to see whether they lead to similar conclusions or offer different perspectives.

Creswell and Creswell also remind us that in exploratory sequential designs (where qualitative comes first), the mixed methods question could be:
“How do themes emerging from qualitative interviews inform the
development of a quantitative survey instrument for measuring
student perceptions of feedback?”

This kind of question reflects the instrument development function of mixed methods—qualitative data used to create more valid and contextually
grounded measurement tools.

The chapter also notes that when the mixed methods question is omitted, it
often signals that the study lacks methodological integration—that the
mixing is superficial or procedural, rather than conceptual. Therefore,
including a mixed methods research question is not only good practice—it is
a sign of design coherence and scholarly maturity.

Creswell and Creswell encourage the use of visual figures and explicit
explanation (such as rationale, priority, timing) alongside the mixed methods
question to further clarify the design. This aligns with their broader guidance
from earlier chapters (especially Chapter 10 on mixed methods procedures)
where the complexity of these studies is best communicated through both
narrative and visual tools.

Finally, the authors remind readers that writing strong mixed methods
questions requires deep familiarity with both paradigms. The questions must
not conflict with the worldview or assumptions of either method. For
example, you should not ask a qualitative question that implies variables or
measurement, nor a quantitative question that includes subjective language or
undefined terms. The mixed methods question must bridge both, without
undermining either.

“METHODS-BASED,” “THEORY-USE-BASED,” and “HYBRID VARIATIONS”
(As discussed in the final pages of Chapter 7 from Research Design: Qualitative, Quantitative, and Mixed Methods Approaches)

In the last section of Chapter 7, Creswell and Creswell reflect on the variations in how researchers formulate research questions and
hypotheses across different studies. They identify three broad categories or
styles that researchers often follow when structuring the core statements of
their study. These are:

1. Methods-Based Variations

2. Theory-Use-Based Variations

3. Hybrid Variations

1. METHODS-BASED VARIATIONS

In this model, the structure of your research questions or hypotheses is driven directly by your method—qualitative, quantitative, or mixed
methods. It’s what Creswell and Creswell call the “default,” and it’s the one
they spend most of the chapter teaching. It reflects the idea that if you’re
doing a qualitative study, you write central qualitative questions and
subquestions. If you’re doing a quantitative study, you write quantitative
research questions, null and alternative hypotheses, and if you’re doing a
mixed methods study, you present quant + qual questions + a mixed
methods question.

In other words, your entire structure of inquiry follows your methodological choice. You choose your approach, and that dictates what
your questions and hypotheses should look like.

Why it matters:
This is the clearest, most common, most “teachable” approach, and it makes
your study coherent. It ensures that your purpose statement, your research
questions, and your design all match each other. So, for example:

● In qual, you don’t write hypotheses—you write “What is the experience of...?”

● In quant, you don’t ask broad exploratory questions—you write “What is the effect of X on Y?”

● In mixed, you deliberately integrate and combine both forms.

Example:

Let’s say you’re doing a qualitative phenomenological study about the experiences of Moroccan teachers dealing with student trauma. A methods-based approach would lead you to write:

“What is the lived experience of secondary school teachers in Morocco when supporting students who exhibit signs of trauma?”

You wouldn’t try to write hypotheses here—because your method doesn’t require or justify them.

2. THEORY-USE-BASED VARIATIONS

This second variation shifts the organizing force from method to theory.
Here, the way you write your questions or hypotheses is shaped more by the
theoretical framework you are working within than by whether your study
is qual, quant, or mixed.
What Creswell and Creswell are saying here is that sometimes, theory
dominates the research logic. So, your questions might directly reflect the
assumptions or propositions of the theory—even if that makes your question
structure deviate from what’s typically expected for that method.

Why it matters:

Sometimes your theoretical lens is stronger than your methodological label. For example, if you’re using feminist theory, you might write
questions that interrogate power, voice, and intersectionality, even in a
study that otherwise looks like a case study or ethnography. Likewise, if
you’re using critical race theory, your questions might focus on structures
of oppression, counter-narratives, and systems of power, rather than on
phenomena or experiences in the neutral sense.

Example:

Let’s say you’re conducting a case study of women in leadership roles in Moroccan public schools. If you’re using a feminist theory lens, your
research questions might look like:

“How do female school leaders construct and express leadership in a male-dominated system?”
“In what ways do institutional structures reproduce gender inequity in school administration?”

Here, even if your method is qualitative and case-based, your questions are
shaped by theory. The voice of the theory becomes dominant, and the
structure of your inquiry reflects its priorities.

This approach is common in critical, transformative, postmodern, feminist, or indigenous research—where the research isn’t just about
understanding a topic, but critiquing it, challenging it, and rebuilding
knowledge from the margins.

3. HYBRID VARIATIONS

This third variation is where things get more flexible—and possibly more
creative. Creswell and Creswell refer to “hybrid variations” as combinations
of styles. These might appear in studies where researchers write both
research questions and hypotheses (even in a single design), or where they
mix formats depending on their needs.

A hybrid variation is a sign that the researcher is customizing the structure of their study to reflect the complexity of the topic, the multi-layered
nature of the design, or the specific challenges of the field. It’s not
chaos—it’s intentional flexibility.

Why it matters:

Sometimes you need to do this. Maybe your study explores how teachers feel
about a new pedagogical method (qual), but also measures how student
grades change (quant). You might have a central qualitative question and
also include a quantitative hypothesis. This doesn’t make your study
incoherent—it makes it hybrid. As long as the rationale is clear and justified,
Creswell and Creswell support this kind of approach.

Example:

A mixed methods study investigating the effect of flipped classrooms in Moroccan high schools might include:

● A quantitative hypothesis: “Students in flipped classrooms will score higher on standardized English tests than those in traditional classrooms.”

● A qualitative question: “How do teachers describe their experience of using flipped classroom strategies with rural students?”

● A mixed methods question: “To what extent do teachers’ experiences help explain differences in student achievement in flipped classrooms?”

This is a hybrid model—each type of question serves a function, and the study embraces design complexity instead of reducing it.

Final Thought on These Three Variations (As Presented in Chapter 7)

By identifying these three variations—methods-based, theory-use-based, and hybrid—Creswell and Creswell are giving you permission to be both
methodologically rigorous and intellectually flexible. You must know
when to follow the structure and when to adapt it based on your goals,
your theory, your participants, and the knowledge you hope to generate.

In short:

● Methods-based means your research structure follows the method.

● Theory-use-based means theory is the organizing center.

● Hybrid means you’re drawing from both, intentionally and transparently.

Understanding the Foundations of Quantitative Methods

This chapter opens by making a clear philosophical and methodological connection: quantitative methods are grounded in a postpositivist
worldview. That means researchers assume reality exists but can only be
known probabilistically, not absolutely. Postpositivists believe that causes
probably determine outcomes, and they use measurable variables,
controlled designs, and statistical inference to discover patterns and test
theories. In this context, Creswell and Creswell show us that survey designs
and experimental designs are two of the most widely used forms in this
tradition. Both focus on studying relationships among variables, but they
differ in the level of control they offer and the kind of inference they allow.

The chapter clarifies this difference through a simple yet effective contrast: if
you are testing whether children who play violent video games are more
likely to engage in playground aggression, you're asking a correlational
question. A survey method could help you explore this relationship by
measuring both variables and seeing if there's a pattern. But if you're testing
whether playing violent video games causes aggression, then you need a true
experimental design, where you can randomly assign participants and
control variables to isolate causal effects. This distinction between
association and causation is fundamental in quantitative research
design—and it dictates whether a survey or experiment is the right tool for
the job.

Survey Design: Purpose, Rationale, and Use

Survey research is presented as an efficient way to quantify trends, opinions, and behaviors within a population by studying a sample.
According to Creswell, the core purpose of survey research is to answer
questions that fall into three types: descriptive, relational, and predictive. A
descriptive survey might simply ask, “What percentage of nurses support
abortion access in hospitals?” A relational survey goes further, asking, “Is
support for abortion access associated with support for hospice care?” A
predictive survey, usually conducted over time (i.e., a longitudinal survey),
might ask, “Does support for abortion services at Time 1 predict nurse
burnout at Time 2?”

This section emphasizes that survey design is particularly appropriate when the goal is description or correlation, not manipulation. The authors also point out that a good method section for a survey proposal should explain why a
survey is appropriate—often by showing that experimental manipulation is
not feasible or ethical, and that surveys offer a faster, broader, or more
practical data collection method.

Further, the design should specify whether the survey is cross-sectional (data
collected at one point) or longitudinal (data collected over time). The
researcher must also explain the mode of delivery—for example, will the
survey be mailed, emailed, administered by phone, or conducted
face-to-face? Each has trade-offs. Online surveys may save time but exclude
participants without internet access. In any case, the researcher is expected to
defend their choice with evidence from existing literature on survey
methodology (e.g., Fowler 2014, Fink 2016).

Population and Sampling in Survey Studies

Creswell and Creswell dedicate an extensive discussion to how to correctly define the population and sampling design in a survey study. This is not a matter of
just picking people at random. It is a rigorous, theory-driven process. First,
the researcher must identify the target population—the group to whom the
results are meant to generalize—and clearly explain its characteristics and
size. If the total population cannot be enumerated, the researcher must
acknowledge this and define how the sample will approximate that
population.

Then comes sampling design, which includes whether the sample is selected
in a single stage or multistage (clustering). A single-stage design means that
the researcher directly samples individuals from a known list. A multistage
design, on the other hand, might first sample organizations (like schools) and
then sample individuals within those clusters. This is especially useful in
large or hard-to-reach populations.

A further distinction is made between probability sampling—in which every member of the population has a known, non-zero chance of being
selected—and nonprobability sampling, such as convenience sampling,
where individuals are chosen based on ease of access. Probability samples,
especially random or systematic samples, are preferred for statistical
generalization. But the authors are realistic: they acknowledge that in many social science contexts, true random sampling is often impractical, so researchers must justify any use of non-random methods.

The authors also address stratified sampling, which ensures that certain
subgroups (e.g., gender, ethnicity) are proportionally represented. This is
important when the population has known subgroups that could affect the
outcome. Stratification ensures fairness and accuracy in drawing conclusions.
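
As a rough illustration of how proportional stratification works in practice, the following sketch (hypothetical sampling frame and column names, not an example from the chapter) draws the same fraction from each stratum so that the sample mirrors the population’s composition:

```python
import pandas as pd

# Hypothetical sampling frame: 1,000 students with a known gender stratum.
frame = pd.DataFrame({
    "student_id": range(1000),
    "gender": ["F"] * 550 + ["M"] * 450,
})

# Proportional stratified sample: 10% within each stratum, so the
# sample keeps the population's 55/45 gender split.
sample = (
    frame.groupby("gender", group_keys=False)
         .apply(lambda g: g.sample(frac=0.10, random_state=42))
)
print(sample["gender"].value_counts())  # 55 F, 45 M
```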

Finally, they stress the need for power analysis to determine appropriate
sample size. This is not just a mathematical detail—it’s an ethical and
methodological necessity. Too small a sample and your results might not be
statistically reliable; too large and you’re wasting resources. The power analysis requires inputs like the expected effect size, desired alpha level (Type I error), and beta level (Type II error), and can be performed using tools like G*Power. Creswell and Creswell walk the reader through an example using burnout among nurses, showing how these numbers are derived and what they mean.
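
For readers who prefer code to G*Power’s point-and-click interface, the same kind of computation can be sketched with statsmodels; the effect size, alpha, and power values below are conventional placeholders, not the chapter’s nurse-burnout figures:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group n needed to detect a medium effect (d = 0.5)
# at alpha = .05 (Type I error) with power = .80 (so beta = .20).
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80
)
print(round(n_per_group))  # about 64 participants per group
```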

Instrumentation and Data Collection Procedures in Quantitative Research

Once the population and sampling decisions have been made, the researcher
must turn to the critical issue of instrumentation—that is, what tools or
devices will be used to actually measure the variables in the study. In this
section, Creswell and Creswell emphasize that the choice of instruments is never casual or
intuitive—it must be explicit, justifiable, and aligned with the research
questions and the theory guiding the study. A variable only becomes real in a
quantitative study when it is operationalized through a valid instrument.

There are two broad options discussed: using an existing instrument or developing a new one. Each choice comes with its own responsibilities.
When using a previously published instrument, the researcher must explain
where it came from, how it was developed, what population it was tested on,
and—most importantly—what evidence exists for its validity and reliability.
Validity here refers to the degree to which the instrument measures what it
claims to measure, and reliability refers to the consistency of results across
repeated uses. These psychometric properties should be clearly cited from
previous studies, or, if being tested again, the study must include a validation
phase.

If developing a new instrument, the researcher takes on a much heavier burden. Creswell and Creswell stress that you cannot simply make up a
questionnaire and assume it measures what you want. You must pilot test it
on a small sample, analyze the data for internal consistency, and conduct
statistical analyses (like Cronbach’s alpha for reliability, or factor analysis for
construct validity). New instruments must also be aligned with your
conceptual definitions of each variable—ensuring that your theoretical
constructs are faithfully translated into measurable indicators.
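
Cronbach’s alpha itself is straightforward to compute from pilot data. The following sketch implements the standard formula on simulated responses (the scale and data are invented purely for illustration):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Simulated pilot sample: 200 respondents, 5 items tapping one trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
responses = trait + rng.normal(scale=1.0, size=(200, 5))
print(round(cronbach_alpha(responses), 2))  # around .83 for this data
```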

After instrumentation is addressed, the chapter turns to data collection procedures. Here, Creswell and Creswell advise that you write this section
of your method chapter in a step-by-step, chronological format. Readers
need to know exactly what was done, by whom, when, and how. For
example, if you conducted a classroom survey, you need to state how
students were recruited, when the survey was administered, how consent was
obtained, whether it was anonymous, how long it took, and how data were
recorded and stored. This level of transparency is not bureaucratic—it’s what
makes the study replicable and credible.

They also recommend being clear about permissions—for example, if you’re using an instrument created by another scholar, you must state whether
permission to use or adapt it was obtained. This aligns with ethical research
conduct, which Creswell and Creswell emphasize throughout the book,
particularly in later chapters on ethics and writing.

Experimental Research Designs: Understanding Cause and Control

Following their in-depth treatment of surveys, the chapter moves into the
domain of experimental research, which Creswell and Creswell describe as
the most rigorous form of quantitative inquiry for testing causal
relationships. An experiment is defined as a design in which an independent
variable is manipulated to determine its effect on a dependent variable,
under conditions of high control. The key here is the idea of manipulation
and random assignment—which separates true experiments from other
forms of research like surveys or observational studies.

They present a core definition: experimental research involves applying a treatment to one group and comparing the outcome to a group that does
not receive the treatment, with both groups being randomly assigned. The
simplest form is the pretest-posttest control group design, where both
groups are measured before and after the intervention.

The chapter walks us through the notation used to represent experimental designs—often drawn from Campbell and Stanley’s (1963) classic work. In
this notation:

● R represents random assignment,

● O stands for observation or measurement (i.e., pretest or posttest),

● X represents the experimental treatment.

So a classic true experimental design might be expressed as:

R O X O
R O    O

This means that both groups are measured before and after, but only one
receives the treatment.

Creswell and Creswell then explore several types of experimental designs:

● True experimental designs, which include randomization and often both pretest and posttest;

● Quasi-experimental designs, which lack random assignment and may only have a posttest;

● And single-subject designs, often used in clinical or behavioral sciences, where one person or a small group is observed over time, across multiple conditions.

The authors emphasize that internal validity—the degree to which the results
can be attributed to the treatment rather than other factors—is the holy grail
of experimental research. But internal validity is constantly under threat from
potential confounds: other variables that could be influencing the outcome.
For instance, what if students in the control group had a substitute teacher
while the treatment group had their regular teacher? That’s a confound.
Experimental designs must anticipate and minimize such threats.

They also highlight external validity, which refers to how well the results
generalize beyond the study’s setting and participants. Sometimes, increasing
internal validity (through tighter control) comes at the cost of generalizability.
Researchers must balance these competing priorities.

Writing the Method Section in Quantitative Studies

Toward the end of the chapter, Creswell and Creswell summarize how all of
this translates into a well-written method section for a quantitative research
proposal. This section should be written in past tense if the study has been
completed, or future tense if it's still being proposed. The structure of this
section should be clean and predictable. They recommend using clear
headings for each subsection: research design, population and sample,
instrumentation, data collection procedures, and data analysis plan.

Each subsection should be written in explicit, step-by-step detail. The goal
is to make the study replicable—not just understandable. A good method
section is one that another competent researcher could follow and repeat,
producing similar results.

In writing about data analysis, for example, it’s not enough to say “data were
analyzed statistically.” You must name the statistical tests, explain what they
were used to test, and indicate what software (e.g., SPSS, R) was used. You
should also explain how you dealt with missing data, outliers, or violations of
statistical assumptions.
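As a sketch of what that level of explicitness looks like in practice, the following Python fragment names its test, documents its handling of missing data, and checks an assumption before testing. The file name, column names, and group labels are hypothetical, and Python here simply stands in for whichever package the researcher actually names:

import pandas as pd
from scipy import stats

df = pd.read_csv("survey_results.csv")  # hypothetical data file

# Missing data: report how many cases are dropped, then drop them
n_missing = df["motivation_score"].isna().sum()
print(f"Dropping {n_missing} cases with missing motivation scores")
df = df.dropna(subset=["motivation_score", "group"])

# Assumption check: normality within each group (Shapiro-Wilk)
for name, sub in df.groupby("group"):
    w, p = stats.shapiro(sub["motivation_score"])
    print(f"Group {name}: W = {w:.3f}, p = {p:.3f}")

# Named test: Welch's independent-samples t-test (no equal-variance assumption)
a = df.loc[df["group"] == "A", "motivation_score"]
b = df.loc[df["group"] == "B", "motivation_score"]
t, p = stats.ttest_ind(a, b, equal_var=False)
print(f"Welch's t = {t:.2f}, p = {p:.4f}")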

At every step, the authors emphasize alignment: the research questions or
hypotheses must match the variables, which must match the instrumentation,
which must match the analysis. This is not a rigid checklist—it’s a matter of
intellectual coherence.

CHAPTER 9 – QUALITATIVE METHODS


This chapter begins by declaring that qualitative research operates according
to a distinct logic of inquiry, one that is rooted not only in different methods
but also in different assumptions about knowledge and reality compared to
quantitative research. While both qualitative and quantitative methods
involve structured processes—formulating questions, collecting data,
analyzing it, and interpreting findings—qualitative methods are deeply
characterized by their flexibility, their reliance on textual or visual data, and
their attention to meaning, context, and researcher subjectivity. Writing a
method section for a qualitative study requires more than just listing steps; it
involves educating the reader about the logic and richness of qualitative
inquiry, and carefully justifying every decision you make.

From the outset, the authors insist that the qualitative method section must
clarify several essential components. These include: the intent of the
qualitative design; the specific strategy of inquiry (like case study,
ethnography, or phenomenology); the researcher’s positionality and
reflexivity (their personal role in the study); the types of data sources being
used; the procedures for recording data; the methods of analysis; and
finally, the approaches used to ensure the accuracy and trustworthiness
of the findings. These steps are not arbitrary; they reflect decades of
qualitative traditions, and each has philosophical significance. A proposal or
final report must make each one explicit, so that the reader can follow not
only what was done, but why it was done.

Characteristics of Qualitative Research: What Makes it Qualitative?

The chapter moves into a foundational section that outlines the defining
characteristics of qualitative research. These are not optional features—they
are the core ingredients that distinguish qualitative work. The authors list
several key traits:

First, qualitative research takes place in the natural setting. This is perhaps
the most sacred tenet of qualitative inquiry. Rather than isolating variables in
a lab or sending out a cold instrument, qualitative researchers go to the field.
They gather data where life actually unfolds. This commitment to real-life
contexts allows the researcher to see behavior, language, and interaction as
they happen, not in artificial conditions.

Second, the researcher is the key instrument. Unlike quantitative
researchers who rely on pre-made instruments (like standardized tests),
qualitative researchers themselves collect the data—through interviews,
observations, and fieldwork. This central role means that the researcher’s
insights, biases, and interpretations are part of the data collection process
itself. It also means that they must be accountable for how their presence
affects the research.

Third, qualitative research relies on multiple sources of data. Rather than
restricting the study to one instrument, qualitative researchers blend
interviews, documents, observations, audiovisual materials, and even
artifacts. These varied data forms are not pre-coded; they’re open-ended,
richly descriptive, and require interpretation. The researcher’s job is to make
sense of patterns across them.

Fourth, the process is both inductive and deductive. Researchers often
begin inductively, building up themes from detailed data. But they may also
loop back, deductively checking whether the emerging themes are supported
by further evidence. This back-and-forth movement is essential—it reflects
the complex process of arriving at a deep, grounded understanding.

Fifth, the focus is always on the participant’s meaning—not the researcher’s
or the literature’s. This requires the researcher to listen, to bracket their
assumptions, and to let participants define what matters.

Sixth, the design is emergent. You don’t fully plan a qualitative study before
you start. The questions may shift. The participants may change. The
researcher must adapt, letting the research evolve as they engage more deeply
with the field.

Seventh, reflexivity is essential. The researcher must reflect on how their
identity, background, and beliefs influence the study. This isn’t just about
bias—it’s about acknowledging that knowledge is constructed in the space
between the researcher and the participant.

Finally, qualitative research produces a holistic account. The researcher
seeks to show the full complexity of a situation, offering multiple
perspectives and interacting factors, rather than reducing everything to a
linear cause-effect model. This “thick description,” as Geertz called it, helps
readers enter the world of the participants and understand how meaning is
made.

Qualitative Research Designs: Choosing the Right Strategy of Inquiry

The next section of the chapter turns to designs—the specific strategy or type
of qualitative approach the researcher uses. Creswell and Creswell build on
the earlier chapters to explain that while dozens of designs exist (as shown in
Tesch’s list of 28 or Wolcott’s tree of 22), there are five central approaches
they recommend for social and health sciences: narrative research,
phenomenology, ethnography, grounded theory, and case study. Each of
these designs has its own roots in the social sciences and its own logic of
data collection, analysis, and presentation.

For example, narrative research focuses on individual stories—often
collecting life histories or personal accounts to reconstruct meaning.
Phenomenology investigates the essence of lived experience, asking what
it’s like to go through a certain event or condition. Ethnography explores
culture-sharing groups, focusing on beliefs, behaviors, and language.
Grounded theory aims to build theory from data, especially about
processes or social interactions. Case study examines a bounded system
in-depth—whether a person, group, institution, or event.

The authors advise that a good methods section should clearly name the
design, explain its origin and definition, justify why it fits the research
purpose, and show how it will shape all other aspects of the study—from the
title, to the data collection, to the way the results are written.

The Researcher’s Role in Qualitative Research: Reflexivity and Positionality

In this next section of Chapter 9, Creswell and Creswell highlight something
that distinguishes qualitative inquiry not just methodologically, but
philosophically: in qualitative research, the researcher is not an external
observer trying to remain invisible or neutral. Instead, the researcher is an
instrument—a living, breathing presence whose background, values,
identity, and perspectives inevitably shape the research.

This is why every qualitative methods section must include a statement about
the researcher’s role, sometimes called a positionality statement. This is
not a confessional or personal anecdote—it is an academic disclosure of how
the researcher is situated in relation to the topic, the participants, and
the setting, and how that positioning might influence data collection,
interpretation, and even access to the field.

Creswell and Creswell explain that a good statement of researcher role
includes both personal and professional information. For instance, if a
researcher is studying classroom management practices in public high
schools and has previously worked as a teacher in that same system, that
background must be disclosed. Why? Because it might shape their
interpretations, influence how participants speak to them, or even create
biases that color what they notice or expect to find.

This section stresses that subjectivity is not a weakness in qualitative
research—it is a source of insight, but only if handled transparently and
critically. This is the meaning of reflexivity: the ongoing process of
examining how one’s own position—gender, race, class, experience, political
commitments—interacts with the research process. Reflexivity is not a
one-time disclosure; it is a discipline, a way of thinking throughout the entire
project.

Creswell and Creswell encourage qualitative researchers to be honest and
open about their connections to the topic. If you chose your topic because of
something in your life—your upbringing, your identity, a problem you care
about—that’s not only acceptable, it’s meaningful. But you must also
recognize that this connection gives you both insight and blind spots, and
you have to remain aware of both. Writing about your role, then, is not about
removing yourself from the study—it’s about naming your presence and
your influence with humility and clarity.

They also recommend stating what access you have to the site or participants,
how your relationships might shape the dynamics of interviews or
observations, and how you plan to manage your role ethically—especially if
you're studying a setting you're personally connected to.

For example, if you’re a Moroccan MA student studying how high school
students experience English instruction, and you went to a similar school
yourself, that’s a meaningful connection—but you need to explain it, reflect
on it, and show how you’ll avoid assumptions based on personal memory or
projection.

Data Collection Procedures in Qualitative Research: Flexibility with Structure

Creswell and Creswell begin this section by reminding us that while
qualitative research designs are flexible and often emergent, data collection
is still systematic. It is not random or casual. Even though a qualitative study
may evolve over time, the researcher must still provide a clear and
transparent plan—especially when writing a methods section for a proposal
or dissertation. That plan must explain, in coherent academic language, what
data will be collected, from whom, how, under what conditions, and why
those decisions are appropriate for the research question and design.

The first step in data collection is identifying the participants—also referred
to as the “unit of analysis.” In qualitative research, participants are not
selected to be “representative” in a statistical sense, as in quantitative work.
Instead, they are chosen through purposeful sampling—a logic that
prioritizes information-rich cases. This means selecting individuals or sites
that can best help the researcher understand the central phenomenon.
The term “purposeful” is key here: the sample is chosen with intention, not
for generalizability, but for depth and insight.

There are several types of purposeful sampling that Creswell and Creswell
mention or imply:

● Maximum variation sampling: selecting participants that represent a wide
range of characteristics or experiences.

● Homogeneous sampling: choosing participants who share certain
characteristics.

● Snowball sampling: where participants refer the researcher to other
potential participants.

● Convenience sampling: used when access is limited—though this must be
justified carefully, as it can limit credibility.

After selecting participants, the researcher must explain how many people
will be included in the study. Unlike quantitative research, qualitative studies
don’t aim for a statistically representative sample size. Instead, they aim for data
saturation—the point at which new data no longer yield new themes or
insights. In practice, this often ranges from 1 to 25 participants in
phenomenological studies, 20–30 in grounded theory, and one or two cases
in case study research. But these are not hard rules—what matters is the
richness and appropriateness of the sample to the research question.
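Saturation is ultimately a judgment, but it can be tracked explicitly. A minimal sketch, assuming each interview has already been reduced to a set of codes (all codes below are invented):

# Count how many new codes each successive interview contributes;
# saturation is suggested when several interviews in a row add nothing new.
coded_interviews = [
    {"internet access", "family support"},    # interview 1
    {"internet access", "language barrier"},  # interview 2
    {"family support", "cost of devices"},    # interview 3
    {"language barrier", "internet access"},  # interview 4
    {"family support", "cost of devices"},    # interview 5
]

seen = set()
for i, codes in enumerate(coded_interviews, start=1):
    new = codes - seen
    seen |= codes
    print(f"Interview {i}: {len(new)} new code(s) {sorted(new)}")

Interviews 4 and 5 contribute no new codes, which is the kind of evidence researchers point to when claiming saturation.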

Next, Creswell and Creswell turn to the types of data collected. The core
sources include:

● Interviews: either one-on-one, semi-structured, open-ended, or focus
groups. These allow participants to express meaning in their own words.

● Observations: where the researcher takes on a spectrum of roles from full
observer to full participant.

● Documents: written materials like letters, emails, policies, social media
posts, etc.

● Audio-visual materials: photographs, videos, recordings, and even maps
or charts.

Each data type has its own logic and strengths, and a good methods section
must not only say what data will be collected, but why—that is, how each
source contributes to answering the research question.

The authors emphasize that qualitative data are usually collected using
protocols or field notes, and these tools must be described in the method
section. For interviews, this might mean describing the interview
protocol—a set of guiding questions (not a strict questionnaire). For
observations, this might mean describing what will be observed, how notes
will be taken, and what observer role will be adopted. Creswell and
Creswell point out that these tools must be piloted and refined, just like
instruments in quantitative research.

Finally, this section addresses ethical considerations: informed consent,
protecting participant identity, gaining access to the field, and being sensitive
to cultural, social, and emotional dynamics. Researchers must make clear that
they are acting with integrity, securing permissions, and managing the
emotional labor of fieldwork with care and respect.

Recording Procedures: Capturing the Field with Rigor and Respect

In this section, Creswell and Creswell emphasize the importance of having a
systematic plan for how qualitative data will be recorded. While qualitative
research often unfolds in natural settings, this does not mean data collection
is informal or casual. It must be transparent, consistent, and justifiable,
especially when you're writing the methods section of a proposal or
dissertation.

Researchers are expected to use protocols—structured tools for recording
interviews, observations, and reflections. For interviews, this might include
an interview guide or schedule, a physical or digital document listing
open-ended questions, probes, and space for notes. These protocols help
ensure that the researcher maintains consistency across interviews while
remaining open to unexpected directions.

For observations, protocols should include checklists, descriptive fields, or
structured categories to capture setting, activity, interaction, mood, and
context. Observational data can include fieldnotes, sketches, photographs,
or audiovisual recordings, and it’s essential to explain how these are stored,
organized, and linked to participant identities in ethically appropriate ways.

Creswell and Creswell highlight the use of recording devices (audio or
video), and they stress that informed consent must cover this explicitly.
Participants must know they are being recorded, and they must approve it.
These recordings later become the raw transcripts that will be analyzed,
coded, and interpreted. A complete method section should clarify who is
transcribing, how transcription is handled, and how the researcher will
deal with transcription errors or ambiguities—especially with accents,
dialects, or emotionally charged speech.

Data Analysis: The Intellectual Labor of Making Meaning

This next section of the chapter is a dense and layered roadmap to the process
of analyzing qualitative data. Creswell and Creswell describe qualitative
data analysis as spiral-like, not linear. You do not move from point A to
point B in a straight line. Instead, you circle back repeatedly, refining codes,
testing themes, and re-reading texts. The process is iterative, reflexive, and
emergent—new ideas surface as you work, and the researcher must be
willing to adapt and deepen the analysis continuously.

The authors outline six core steps in qualitative analysis (a sequence
expanded from Creswell’s earlier works):

1. Organizing and preparing the data: This means transcribing
interviews, scanning documents, sorting field notes, labeling
photographs, and creating digital folders or qualitative data software
files (e.g., in NVivo, Atlas.ti, or MAXQDA). The goal is to make the
data manageable and accessible.

2. Reading through all the data: Before coding anything, the researcher
must immerse themselves in the material. This step is about getting a
sense of the whole, reading slowly, reflectively, and letting initial
impressions or “sensings” emerge.

3. Beginning detailed coding: Here begins the process of labeling
segments of data with codes—short phrases or words that capture the
essence of what's being said. Codes can be descriptive (“teacher
praise”), process-based (“resisting authority”), or in vivo (direct
quotes used as codes). This step may involve hundreds of codes,
which are gradually grouped and refined.

4. Generating categories or themes: Once codes are collected, the
researcher clusters them into themes—larger interpretive patterns that
reflect recurring ideas, tensions, or insights. A theme is more than a
category; it’s a conceptual thread that runs through the data. For
example, a study of teacher burnout might yield themes like “emotional
exhaustion,” “loss of control,” or “resistance and resilience.”

5. Representing the themes: Creswell and Creswell encourage clear,
layered presentation of themes in the final write-up. This might take the
form of narrative discussion, quotes from participants, tables,
visual models, or diagrams that illustrate how themes interconnect.

6. Interpreting the meaning of themes: The final step is the deep
interpretive work. The researcher reflects on what the themes mean,
how they connect to literature or theory, and what insights or
implications they hold for the field. This is where reflexivity, intuition,
and creativity meet critical reasoning. What do these voices tell us
about the world?

Creswell and Creswell remind us that interpretation should be grounded in
the participants’ words and the researcher’s analytic engagement—not
speculative claims or overgeneralizations.
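Much of steps 3 and 4 is interpretive judgment, but the bookkeeping behind it is often delegated to software (the chapter names NVivo, Atlas.ti, and MAXQDA). A minimal Python sketch of that bookkeeping, with every code, theme, and participant label invented:

from collections import Counter

# Step 3 output: hypothetical (code, participant) pairs from coded segments
segments = [
    ("emotional exhaustion", "P1"), ("loss of control", "P1"),
    ("emotional exhaustion", "P2"), ("loss of control", "P2"),
    ("resistance", "P3"), ("emotional exhaustion", "P3"),
]

# Step 4: the human decision to cluster codes into broader themes,
# recorded here as an explicit mapping
theme_of = {
    "emotional exhaustion": "burnout",
    "loss of control": "burnout",
    "resistance": "resilience",
}

theme_counts = Counter(theme_of[code] for code, _ in segments)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n} coded segment(s)")

The clustering itself remains a human, interpretive act; the software only keeps the audit trail honest.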

Validation in Qualitative Research: Trustworthiness, Credibility, and Ethical Rigor

In this section, Creswell and Creswell make it absolutely clear: qualitative
research must be trustworthy. While qualitative studies do not rely on
statistical indicators of validity or reliability as quantitative research does,
they are still subject to scrutiny—perhaps even more so—because of their
interpretive, flexible, and subjective nature. That’s why qualitative
researchers must proactively demonstrate that their findings are not merely
personal opinion or anecdotal storytelling, but the product of
methodologically rigorous and ethically sound inquiry.

The authors use the term validation broadly, borrowing from Lincoln and
Guba’s foundational concept of trustworthiness, which includes several
overlapping criteria: credibility, transferability, dependability, and
confirmability. These are the qualitative counterparts to the quantitative
ideas of internal validity, external validity, reliability, and objectivity.

Creswell and Creswell present eight key strategies for ensuring validation in
qualitative research. Though they do not expect every study to use all eight,
they emphasize that a strong qualitative proposal should use at least two or
more, and clearly explain them in the methods section.

The first and most widely respected strategy is triangulation. This involves
using multiple data sources, researchers, theories, or methods to confirm
findings. If several different kinds of evidence point to the same conclusion,
it strengthens the claim.

The second is member checking. This means going back to the participants
with the findings—whether in summary, full transcripts, or themes—and
asking: Does this reflect your experience? This is a powerful way to ensure
that the interpretation honors participants’ own voices and avoids
misrepresentation.

The third is the use of rich, thick description. This refers to writing that
paints a full, vivid picture of the setting, the people, the actions, and the
emotions involved. It allows the reader to enter the world of the participants
and judge for themselves whether the findings feel trustworthy or relatable.

The fourth strategy is clarifying researcher bias—which connects back to
the earlier discussion of reflexivity. Here, the researcher openly
acknowledges their positionality: how their background, experiences, values,
and assumptions may have influenced the study.

The fifth is presenting negative or discrepant information. This means not
hiding data that contradicts your themes. Instead, researchers are encouraged
to include complexities, exceptions, and contradictions in the findings. This
shows intellectual honesty and strengthens credibility.

The sixth strategy is prolonged engagement and persistent observation.
Spending extended time in the field allows the researcher to develop trust
with participants, understand context deeply, and see beyond surface
impressions.

The seventh is peer debriefing—sharing your data, emerging themes, and
interpretations with an informed peer or advisor who can challenge your
assumptions and help refine your thinking.

And finally, the eighth strategy is the use of an external audit. This is a
formal process where an independent scholar reviews your entire
project—from data collection to analysis to interpretation—and evaluates
whether the process was rigorous, ethical, and consistent.

All these strategies signal to your audience—whether academic, professional,
or community-based—that your study can be trusted, not because it is
statistically proven, but because it is transparently constructed, ethically
sensitive, and critically interpreted.

Writing the Qualitative Report: Narrative, Voice, and Analytical Power

Having addressed how to validate a qualitative study, Creswell and Creswell
conclude the chapter with a discussion of how to write up the results. Here,
the tone shifts from methods to presentation, and the authors recognize that
writing in qualitative research is not just a mechanical process—it is a form
of representation. It is how the researcher gives form and voice to the
people and phenomena studied.

They explain that qualitative reports are typically narrative in structure.
That means they are written in flowing, human-centered language, rather than
in numbered, statistical bullet points. They are often organized by themes,
with each theme presented in its own section, supported by direct quotes
from participants and followed by the researcher’s interpretive commentary.

The use of participant voices is a defining feature of qualitative writing.
This means including direct quotations—not just paraphrases—in a way that
brings the reader close to the lived experiences of the participants. These
quotes must be selected carefully: not just dramatic soundbites, but passages
that illustrate key ideas, contradictions, or emotional nuances.

Creswell and Creswell also discuss the use of figures, visuals, and matrices
in presenting qualitative findings. While qualitative work is narrative, it still
benefits from visual aids—theme maps, coding trees, process diagrams—that
help organize complex ideas. These tools also make your analysis more
transparent to the reader.

Importantly, the chapter reminds us that writing is interpretive. Every
decision—what to quote, how to title a theme, what order to present findings
in—is part of the researcher’s analytical voice. This doesn’t mean that
writing is arbitrary or biased, but it acknowledges that knowledge is always
constructed, not simply discovered. Therefore, qualitative writing must be
honest, thoughtful, and self-aware.

Finally, the authors encourage researchers to write in a way that reflects their
design tradition. A narrative study might present findings as a story. A
grounded theory study might use a process model. A case study might
organize results by case and subcase. A phenomenological study might
begin with a thick description of the experience and then move toward the
distilled essence. Each design has its own rhetorical style, and the writing
must align with it.

Closing Reflection: Why This Chapter Matters

Chapter 9 ends not with a formula, but with a philosophy: qualitative
research is about meaning. It is about honoring the complexity of lived
experience, engaging deeply with participants’ voices, and constructing
knowledge that is grounded, ethical, and richly human. A qualitative methods
section is not just a list of steps—it is a demonstration of thoughtful design,
transparent practice, and respect for the people and worlds being
studied.

The rigor of qualitative research lies not in control and measurement, but in
reflexivity, depth, and clarity—in making your process visible, your
reasoning logical, and your interpretations both faithful and critical. Creswell
and Creswell give you not only the roadmap for this, but the language and
structure you need to write and defend your own work.

CHAPTER 10 – MIXED METHODS PROCEDURES

Why We Mix Methods and What This Chapter Does

The chapter begins by introducing the core idea of “mixing” as not merely
the coexistence of quantitative and qualitative data, but as the integration of
both to gain deeper insights into a research problem. Creswell and Creswell
emphasize that both forms of data offer distinct kinds of knowledge:
qualitative data provides detailed, open-ended insights into human
experiences, while quantitative data offers standardized, measurable variables
suitable for generalization. When used together, they allow the researcher to
overcome the limitations of each approach by leveraging their
complementary strengths. The concept of mixing isn't random or stylistic—it
is methodological. It is part of a research tradition that has its own theories,
assumptions, procedures, and justifications. Thus, writing a mixed methods
procedure section means doing more than saying, “I’ll use both types of
data.” It means designing the study in a way that intentionally, logically, and
rigorously integrates the two into a unified structure.

This is why the authors stress that a mixed methods procedure section should
begin by explaining what mixed methods research is, why it is being used,
what design is chosen, and how every element of the study—from data
collection to analysis to interpretation—will be shaped by this methodology.
This is not a loose mixing of techniques but a structured and well-theorized
research strategy.

Mixed Methods as a Methodology: Framing It in Academic Discourse

The next section explains that mixed methods research is not simply a tool or
a technique—it is a methodology, with a rich history, theoretical grounding,
and set of procedures. The authors trace its development from the late 1980s
and early 1990s to its current status as a recognized methodology supported
by major journals, textbooks, and academic communities. They cite
important milestones such as the Handbook of Mixed Methods (Tashakkori &
Teddlie), and they highlight its growing presence in disciplines like
education, health sciences, public policy, and social work.

By calling it a “methodology,” the authors mean that mixed methods research
includes:

● A philosophical foundation, often rooted in pragmatism (a view that values
practical solutions and acknowledges multiple perspectives);

● A theoretical framework, which might guide how the two types of data are
connected;

● And a set of procedural steps—designs, sampling strategies, data collection
phases, integration techniques—that give coherence to the project.

A researcher writing a mixed methods section must educate their readers on
this background. This includes explaining that mixed methods research is
defined by:

1. The collection of both qualitative and quantitative data in response to
research questions;

2. The use of rigorous methods for analyzing each type of data;

3. The integration of those data at one or more stages;

4. The design structure that dictates how and when this mixing occurs;

5. And the worldview or theory that supports the need for integration.

The authors also suggest explaining the terminology used. While many
phrases exist (multimethod, integrated, mixed methodology), the academic
field has mostly converged around the term mixed methods.

Why Choose Mixed Methods? Academic and Practical Justifications

Creswell and Creswell now turn to the reasons for choosing mixed methods.
At the general level, researchers may choose it to benefit from the
strengths of both approaches—like adding statistical weight to human
stories, or adding human meaning to numerical findings. At a more practical
level, mixed methods is suited to complex research questions that require
multiple forms of evidence. For example, you might use statistics to show a
pattern, but interviews to explain why that pattern exists.

They then list several specific scenarios where mixed methods is especially
appropriate:

● To compare different perspectives on the same issue (e.g., student survey
scores vs. teacher interviews on classroom engagement);

● To explain unexpected quantitative results with follow-up qualitative data;

● To develop new instruments based on qualitative findings, which are then
tested in a broader population;

● To add context to an experiment or evaluation by embedding participant
voices;

● To document diverse experiences within a case study;

● To evaluate both the process and outcomes of an intervention;

● Or to represent marginalized groups more fully by combining voices with
generalizable trends.

In each of these cases, mixed methods is not used just for show—it is used to
answer research questions more fully than either method could do alone.

The Convergent Design: Parallel but Equal

The convergent design, sometimes called the concurrent triangulation
design, is one of the most widely used and straightforward mixed methods
designs. In this approach, the researcher collects both qualitative and
quantitative data during the same phase of the research process, analyzes
each data set separately, and then brings the two together to compare,
contrast, or merge the results.

The goal here is to obtain complementary perspectives on the same
phenomenon. Creswell and Creswell make it clear that the power of the
convergent design lies in its symmetry—both data types are given equal
weight (unless otherwise justified), and they are used to assess whether they
lead to similar or contradictory interpretations. The researcher may design
a study, for instance, to explore student engagement. A quantitative survey
might measure levels of participation, while qualitative focus groups explore
the reasons behind disengagement. If the two forms of data converge, this
strengthens the validity of the findings. If they diverge, that too is
meaningful, suggesting areas of complexity or contradiction worth deeper
exploration.

Importantly, this design requires a plan for integration. Integration happens
when the two data sets are brought together—through side-by-side
comparison, narrative weaving, or the use of joint displays (e.g., tables that
show how qualitative themes align or contrast with quantitative statistics).
Creswell and Creswell emphasize that without integration, the design is not
truly “mixed”—it becomes just two separate studies under one roof. The
convergence must be intentional and meaningful.

The convergent design is efficient and works well when the researcher has
equal expertise in both methods. However, it does demand the capacity to
analyze two types of data simultaneously and the ability to resolve
differences if findings conflict. It also requires sufficient sample size and
data quality on both sides to be credible.
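The idea of a joint display is easy to sketch. Using pandas, and with every figure and theme invented for illustration, a researcher might align quantitative results with the qualitative themes that speak to them:

import pandas as pd

# Hypothetical joint display for a study of student engagement
joint_display = pd.DataFrame({
    "survey_finding": [
        "Participation mean 3.1 / 5",
        "Attendance down 12% in spring",
    ],
    "qualitative_theme": [
        "Fear of being judged when speaking (focus groups)",
        "Classes feel disconnected from exams (interviews)",
    ],
    "assessment": ["converges", "diverges"],
})
print(joint_display.to_string(index=False))

Rows that converge strengthen the claim; rows that diverge flag exactly where deeper exploration is needed.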

The Explanatory Sequential Design: Numbers First, Stories Later

The explanatory sequential design is a two-phase approach where the
researcher first collects and analyzes quantitative data, and then follows up
with qualitative data to help explain or interpret the initial results. The
logic here is deductive: you begin with a theory or hypothesis, test it with
numbers, and then dig deeper through interviews, focus groups, or other
qualitative means to make sense of the patterns you found.

Creswell and Creswell note that this design is especially powerful when
quantitative results raise questions that need human insight to answer. For
instance, a survey might reveal that male students report significantly lower
motivation than female students in English language classes. But the numbers
alone don’t explain why. A second qualitative phase might include interviews
with male students to explore what disengages them, what they value, or
how they perceive classroom dynamics.

This design has clear procedural separation—data collection and analysis
happen in sequence, not simultaneously. The initial quantitative findings
guide the selection of participants and the formulation of questions in the
qualitative phase. That’s what gives the explanatory sequential design its
strength and flexibility—you can tailor your qualitative phase based on
exactly what the quantitative data showed.
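That tailoring step can be made concrete. A sketch, with invented data, of using phase-one survey results to purposefully select phase-two interviewees:

import pandas as pd

# Hypothetical phase-1 survey results
survey = pd.DataFrame({
    "student":    ["S1", "S2", "S3", "S4", "S5", "S6"],
    "gender":     ["M",  "F",  "M",  "M",  "F",  "M"],
    "motivation": [2.1,  4.3,  1.8,  3.9,  4.1,  2.4],
})

# Phase 2: invite the lowest-scoring male students, since the
# unexpected quantitative result concerned that subgroup
follow_up = (
    survey[survey["gender"] == "M"]
    .nsmallest(3, "motivation")["student"]
    .tolist()
)
print("Invite for interviews:", follow_up)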

Integration typically occurs in the interpretation stage, where the researcher
links the statistical results to the qualitative explanations. For example, tables
or graphs might be annotated with qualitative quotes, or each quantitative
finding might be followed by a narrative exploration.

This design is particularly appealing to researchers who are more familiar
with quantitative methods but want to deepen or humanize their findings
with qualitative insight. It is also widely used in program evaluation,
educational research, and public health studies, where large-scale data
needs to be interpreted in light of participant experience.

The Exploratory Sequential Design: Stories First, Numbers Later

In contrast to explanatory designs, the exploratory sequential design begins
with qualitative data collection and analysis, followed by a quantitative
phase that builds upon the initial findings. This approach is inductive—it
seeks to explore a topic or phenomenon with little prior theory, and then use
that initial exploration to develop instruments, generate variables, or test
new ideas quantitatively.

Creswell and Creswell point out that this design is particularly useful when
little is known about a population or process. For example, if researchers
want to understand the barriers faced by rural Moroccan students when
accessing online education, they may first conduct interviews to explore
themes such as internet access, language challenges, or family support.
Those themes can then be operationalized into variables, which can be used
to construct a survey or assessment tool for broader quantitative testing.

The strength of this design lies in its developmental power: it allows
researchers to build culturally appropriate, context-sensitive, and
empirically grounded quantitative instruments. But the design also has
limitations—it can be time-consuming, especially when building and
validating new instruments. It also requires strong qualitative skills upfront
and the capacity to translate open-ended insights into structured,
measurable variables.

Integration in this design often happens during the instrument development
or interpretation phases, where the researcher shows how the qualitative
findings informed the quantitative tool, and how the quantitative results
confirm or complicate the earlier insights.
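The audit trail from theme to instrument can be kept explicit. A minimal sketch, with all themes and draft items invented, recording which qualitative theme produced which survey item:

# Hypothetical mapping from phase-1 themes to draft Likert items,
# preserving the trail from qualitative insight to instrument
theme_to_items = {
    "internet access": [
        "I can reliably connect to the internet from home.",
    ],
    "language challenges": [
        "The language of online courses makes them hard to follow.",
    ],
    "family support": [
        "My family encourages me to spend time on online classes.",
    ],
}

for theme, items in theme_to_items.items():
    for item in items:
        print(f"[{theme}] {item} (1 = strongly disagree ... 5 = strongly agree)")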

The Embedded Design: Supporting One Method with the Other

The embedded mixed methods design is structured around a primary
method, either qualitative or quantitative, within which a secondary method
is nested. The secondary method does not stand on its own—it is used to
support, enhance, or expand the findings or procedures of the main method.
According to Creswell and Creswell, this design is particularly useful when a
study primarily follows one methodological tradition, but the researcher
believes that integrating an additional strand of data could provide extra
depth, insight, or clarification.

For example, suppose a researcher is conducting a large-scale quantitative
study evaluating the impact of a new curriculum on student test scores. While
the survey data will show whether scores improved, the researcher may
embed a qualitative component—such as a few teacher interviews—to
understand how the curriculum was implemented or why it worked differently
across classrooms. Here, the qualitative data is embedded within a
quantitative framework, giving meaning and context to the numeric
outcomes.

Alternatively, the embedding can work in the opposite direction. A
qualitative case study may embed a small quantitative strand—like a short
pre- and post-survey—to add an empirical dimension to an otherwise
interpretive project.

The key to this design, Creswell and Creswell emphasize, is that the two
strands are not equal in priority. The embedded method exists in service to
the larger design, and the timing of data collection is often simultaneous
or sequential, depending on the research question. Integration often occurs
during interpretation, where the embedded findings are used to clarify or
qualify the primary findings.

The Transformative Design: Social Justice Through Methodological Integration

The transformative design, which Creswell and Creswell present as one of
the most ethically and politically motivated forms of mixed methods
research, emerges from critical theory traditions—particularly those
concerned with social justice, power, and the representation of
marginalized voices. In this design, the use of mixed methods is not only
methodological but ideological. The research is guided by a transformative
worldview—such as feminist theory, critical race theory, queer theory,
disability studies, or decolonial perspectives.

In this design, qualitative and quantitative methods are both mobilized in the
service of challenging inequality, amplifying excluded perspectives, and
producing actionable knowledge. The transformative design might take the
form of any of the core mixed methods models (convergent, sequential, etc.),
but it is distinguished by its intent: the researcher is committed not just to
understanding the world but to changing it.

For instance, a researcher might conduct a convergent mixed methods study
on language policy in schools, collecting survey data from administrators and
interviews with students from minoritized backgrounds. The goal would be
not just to explore patterns and experiences, but to make those findings
visible to policymakers, interrupt dominant narratives, and advocate for
change in language education.

Creswell and Creswell stress that researchers using this design must be
explicit about their theoretical lens, their ethical commitments, and their
plans for dissemination and action. This design often requires community
partnerships, collaborative research strategies, and reflexive
accountability throughout the research process. It is the most politically
conscious form of mixed methods research.

The Experimental Mixed Methods Design: Strengthening Causality with Context

The experimental design in mixed methods research begins with a
randomized or quasi-experimental intervention—a design common in
health sciences, education, and psychology—and adds a qualitative
component to enhance understanding. The quantitative phase tests causal
hypotheses through manipulation and control, while the qualitative phase
helps explain mechanisms, expose participant experiences, or understand
implementation challenges.

For example, a researcher might run a controlled trial comparing the effects
of a mindfulness program on student anxiety levels. The primary data comes
from standardized scales, analyzed statistically. However, the researcher also
interviews participants to learn how they perceived the program, what
challenges they faced, and how they applied what they learned in their daily
lives. The qualitative data provides depth and narrative richness, revealing
why the program worked better for some students than others.

Creswell and Creswell highlight that integration often occurs after the
intervention, during interpretation, though it may also influence design
adjustments during the study. This design strengthens both internal validity
(through control and randomization) and ecological validity (through
contextual qualitative insights). It is especially powerful for evaluation
research, where understanding why an intervention works is as important as
whether it works.

The Case Study Mixed Methods Design: Bounded Depth with Breadth

Finally, the case study mixed methods design uses mixed methods strategies
to explore a bounded system—a single case or multiple cases—by
integrating both qualitative and quantitative data to understand its
complexity from multiple angles. This design allows for rich description of
the case’s internal dynamics while also supporting comparative or
generalizable conclusions.

A mixed methods case study might, for instance, examine how a single high
school implements inclusive education policies. The researcher could collect
qualitative data through interviews, document analysis, and classroom
observations to understand how inclusion is practiced, while also collecting
quantitative data such as performance metrics or attendance rates to
measure student outcomes.

The emphasis here is on deep immersion in a specific case, but with the
added value of data integration across paradigms. The goal is not just to tell
a story but to support that story with measurable evidence, or to use the
story to explain the meaning of the numbers.

Creswell and Creswell underline that in this design, the researcher must
clearly define:

● The case itself,

● The boundaries of time, place, or activity,

● The types of data collected,

● And how those data will be integrated and interpreted to represent the
complexity of the case.

Final Guidance: Writing the Mixed Methods Procedures Section

In the final pages of Chapter 10, Creswell and Creswell bring all these
models together by offering guidance on how to actually write a mixed
methods procedures section. They stress clarity, structure, and full
explanation. A well-written procedures section must:

● Clearly state that the design is mixed methods;

● Name the specific design (e.g., convergent, explanatory sequential,
transformative embedded, etc.);

● Justify the use of mixed methods based on the research questions;

● Indicate the priority (which method, if any, dominates);

● Explain the timing (concurrent or sequential);

● Describe the integration strategy (where and how the data are mixed);

● And specify the philosophical or theoretical framework, if relevant.


In short, the mixed methods section is a methodological map. It must show
the reader that the researcher knows why they are mixing methods, how
they will do it, and what each method will contribute to answering the
research question. It is a place of design logic, not improvisation.

Writing the Proposal: Structure, Argumentation, and Intellectual Integrity

The chapter begins with a powerful reminder that before you even design
your research, you need to start writing it out. Writing isn’t just a form of
recording ideas—it’s a process of thinking them through. Creswell and
Creswell start by pointing out that any proposal, regardless of whether it is
qualitative, quantitative, or mixed methods, must offer the reader a clear
argument for why the research matters, how it will be done, and why it will
be credible.

They draw on Maxwell’s (2013) model, which organizes the central
intellectual commitments of a proposal into nine fundamental questions.
These include: what do readers need to know and understand about your
topic? What are you proposing to study? Who are the participants? What
methods will you use, and how will you validate your findings? What ethical
issues are involved? Each of these questions reflects a pillar of scholarly
rigor, and a well-developed proposal addresses all of them—not as a
checklist, but as an integrated and coherent argument.

The authors insist that every proposal, no matter the design, must tell a
logical story. That story is not about the researcher; it is about the research
problem, its urgency, its significance, and how it can be meaningfully
explored. This framing sets the tone for the rest of the chapter, which unfolds
into concrete writing strategies for qualitative, quantitative, and mixed
methods proposals, followed by deep guidance on ethical integrity.

Formats for Different Methodological Approaches

Creswell and Creswell then walk us through three sample proposal
formats—one for qualitative research, one for quantitative, and one for
mixed methods. These aren’t rigid templates; they are intellectual
frameworks that help you structure your argument logically and
appropriately, based on the assumptions of your approach.

In qualitative research, they provide two examples: one grounded in
constructivist/interpretivist traditions, and one in participatory–social
justice paradigms. Both include sections like the introduction, statement of
the problem, purpose, research questions, design, researcher role, data
collection and analysis, ethical issues, validation strategies, and significance
of the study. The social justice model adds layers like addressing power,
marginalization, and anticipated social transformation. This makes it clear
that even the structure of your proposal reflects your philosophical
worldview.

The quantitative format is more standardized—following the IMRD model:
Introduction, Methods, Results, Discussion. But Creswell and Creswell
expand on this by advising researchers to clearly identify their theoretical
framework, define variables, describe instruments, explain statistical
procedures, and anticipate ethical issues such as informed consent, data
handling, and objectivity. While more rigid than qualitative formats, the
quantitative proposal still requires a carefully structured argument for
credibility.

The mixed methods format is the most complex, and rightly so. The authors
provide a comprehensive outline that includes both sets of
components—quantitative and qualitative—as well as integration points, a
clear rationale for mixing, a visual diagram of procedures, and explanation
of timing, priority, and philosophical alignment. Mixed methods research
is not just about collecting two types of data. It’s about methodological
coherence.

Designing Sections and Writing with Purpose

After outlining the structures, the authors offer specific advice on writing
each section of a proposal. They encourage students to start writing early
and often, not to wait until everything is clear in their mind. Writing, in this
view, is a form of discovery. You write to think, not only to express what
you already know.

Their advice is simple: just get the first draft down, messy as it may be. Then revise. Then polish.
The point here is that the academic writing process should be iterative, and
writing is itself a cognitive activity that brings your ideas into clearer focus.

They recommend that researchers start with an outline, then draft sections
quickly, and then refine. You don’t need to be perfect at first. But what you
do need is structure, clarity, and consistency.

They also emphasize the importance of finding examples. Look at proposals
written by others. Ask your advisor for models. Study how they structure
arguments, how they cite literature, and how they organize their ideas.
Especially for graduate students, seeing examples can give you both
confidence and inspiration.

Writing as Thinking: Style, Voice, and Flow

This section of the chapter turns the focus inward, examining what makes
writing not just functional, but effective and scholarly. Creswell and
Creswell argue that writing is a form of thinking, and that good writing
helps both the reader and the writer clarify the research’s purpose, logic, and
implications.

They offer insights drawn from literary writers, including Stephen King,
William Zinsser, and Annie Dillard, who stress the importance of developing
a writing habit, using concrete details, avoiding overwriting, and editing
for clarity and energy. These aren’t just writing tips—they’re strategies to
make your proposal readable and persuasive.

A great deal of attention is paid to the idea of coherence—the glue that holds
your manuscript together. This includes consistent use of terms, clear
transitions between sections, and paragraphs that build logically from
sentence to sentence. They introduce the “hook-and-eye” technique:
imagining each sentence as linked to the next, pulling the reader forward. If a
sentence doesn’t logically follow the one before, or if a paragraph jumps to a
new idea without transition, coherence is lost and the reader’s trust begins to
erode.

They also discuss voice and tense. Active voice is favored over passive voice
because it creates clarity and momentum. Strong verbs are better than
abstract nouns. Verb tense should match the part of the proposal: use past
tense for literature and completed studies; present or future tense for your
plans. Mixed methods researchers must be especially careful to signal which
part of the study is being discussed and which tense matches it.

Polishing your writing also means trimming the fat: eliminating unnecessary
qualifiers, modifiers, and jargon. Say what you mean directly. Don’t inflate
your language to sound “academic”—clarity and simplicity are the hallmarks
of excellent scholarly writing.

Ethical Considerations: Integrity in Research Practice

The second half of the chapter addresses the issue of ethics in research.
Creswell and Creswell treat this not as an afterthought, but as a foundation
of all good research. Ethics, they insist, is not just about avoiding
misconduct. It is about honoring your participants, your readers, and the
research community through transparency, fairness, and responsibility.

They outline specific ethical concerns that must be addressed at every phase
of research:

● Before the study: Obtain informed consent, apply to the Institutional
Review Board (IRB), consider vulnerable populations, and anticipate
cultural sensitivities.

● During the study: Avoid deception, treat participants with respect,
manage power dynamics, prevent exploitation, and ensure reciprocity
(especially in participatory research).

● After the study: Safeguard confidentiality, store data responsibly,
avoid plagiarism, report results honestly (including negative findings),
and consider how your work affects participants, communities, and
policy.

They also discuss issues like authorship, data ownership, conflict of
interest, and the consequences of falsifying or suppressing data. These are
real problems in academia, and researchers must be proactive in anticipating
and preventing them.

Ethics is not just about ticking boxes on a form. It is a continuous process of
reflection, shaped by the researcher’s values and the study’s context. It
requires vigilance and humility.

The chapter concludes with a call to develop your own writing practice, to
write every day, to reflect often, to stay engaged with the process. Read good
research. Read good writing. Write proposals not only to fulfill requirements,
but to craft compelling, coherent, and ethical studies.

Creswell and Creswell remind us that every decision we make in
writing—from choosing a topic to structuring our argument to citing a
source—is a reflection of our intellectual and ethical identity. A good
proposal is not just about research design—it is about communicating with
integrity, clarity, and care.
