The Quantitative Qualitative Distinction
NOTES
1. There are many textbooks providing introductions to research methods for students of education
and the social sciences. Burns (2000) is typical of the genre and is widely used. We have focused
our remarks on it for the sake of definiteness.
2. This form is recognisable as the argument schema known in the traditional formal logic of the
Middle Ages as Barbara with singular minor. An argument is deductively valid just in case it is
logically impossible for all the premisses to be true and the conclusion false. Traditional
syllogistic can represent some, but not all, valid deductive arguments.
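A minimal rendering of Barbara with singular minor in modern predicate notation (the letters F, G and the constant a are purely schematic):

```latex
% All F are G; a is F; therefore a is G.
\[
\forall x\,(Fx \rightarrow Gx),\quad Fa \;\vdash\; Ga
\]
```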
3. Since Burns will later cite Popper with approval for endorsement of the scientific approach, it may
be worth noting Popper’s own account of this ‘firm basis’, which differs starkly from Burns’s:
‘The empirical basis of objective science has thus nothing ‘‘absolute’’ about it. Science does not
rest upon solid bedrock. The bold structure of its theories rises, as it were, above a swamp. It is
like a building erected on piles. The piles are driven down from above into the swamp, but not
down to any natural or ‘‘given’’ base; and if we stop driving the piles deeper, it is not because we
have reached firm ground. We simply stop when we are satisfied that the piles are firm enough to
carry the structure, at least for the time being’ (Popper, 1959, §30, p. 111). Burns does not cite this
work, though it is of course the central text for Popper’s views on scientific method.
4. Mathematical research is certainly not quantitative in the sense defined.
5. And even when they seem to, this may be a misleading appearance created by the conventions for writing
up research insisted upon by editors of scientific journals. ‘[T]he scientific paper is a fraud in the
sense that it does give a totally misleading narrative of the processes of thought that go into the
making of scientific discoveries’ (Medawar, 1963, p. 38).
6. One recalls the boast by a rural radio station that it plays both kinds of music, Country and
Western.
7. Burns defines: ‘power is the ability of a statistic to correctly reject the null hypothesis when it is
false’ (2000, p. 160).
8. The N-P [Neyman-Pearson] theory is that developed by Jerzy Neyman and Egon Pearson. It departs
from Fisher’s idea of hypothesis testing, which considers the probabilistic behaviour of just one
hypothesis, in that it also recognises the need to include an alternative hypothesis, together with the
possibility of errors in deciding which of the hypotheses could be true. Under the N-P theory the
error of rejecting a true null hypothesis is
called an error of the first kind (type I error), while the error of accepting or failing to reject a false
null hypothesis is called an error of the second kind (type II error). The N-P theory uses the
criterion of what it calls the critical region. If the value of the test statistic, as calculated from the
sample, falls into that region then the null hypothesis, according to which such an event is deemed
improbable or infrequent, is regarded as rejectable—albeit with the possibility of a type I error. In
addition the N-P theory also discusses the idea of the power of a test, which is the probability or
(limiting relative) frequency with which a test would correctly reject a false null hypothesis.
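The N-P quantities described in this note (significance level, critical region, power) can be sketched numerically. The following is an illustrative calculation for a one-sided z-test with known variance; the null value, alternative value, sample size and significance level are chosen purely for illustration:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF, expressed via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

# One-sided z-test of H0: mu = 0 against H1: mu = mu1 > 0,
# with known sigma, sample size n, significance level alpha.
alpha, mu1, sigma, n = 0.05, 0.5, 1.0, 25

# Critical region: reject H0 when the z statistic exceeds z_alpha,
# so that P(type I error | H0 true) = alpha by construction.
z_alpha = 1.6449  # upper 5% point of the standard normal

# Power: probability the statistic falls in the critical region
# when H1 is true; the type II error rate is beta = 1 - power.
power = 1 - norm_cdf(z_alpha - mu1 * sqrt(n) / sigma)
beta = 1 - power
print(round(power, 3), round(beta, 3))
```

With these illustrative values the test rejects a false null about 80% of the time, so the type II error rate beta is about 0.2.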
9. Work for this paper was hindered by the inadequate funding of Australian academic libraries.
REFERENCES
Burns, R. B. (2000) Introduction to Research Methods, 4th edn (Sydney, Longman, Pearson
Education Australia).
Carver, R. (1993) The Case Against Statistical Significance Testing, Revisited, Journal of
Experimental Education, 61, pp. 287–292.
Cohen, J. (1994) The Earth is Round (p < .05), American Psychologist, 49, pp. 997–1003.
Ernest, J. M. and McLean, J. E. (1998) Fight the Good Fight: A Response to Thompson, Knapp,
and Levin, Research in the Schools, 5.2, pp. 59–62.
Fisher, R. A. (1933) The Contributions of Rothamsted to the Development of the Science of
Statistics. Annual Report of the Rothamsted Station, pp. 43–50. (Reprinted in his Collected
Papers, ed. J. H. Bennett, vol. 3 (Adelaide, University of Adelaide Press), pp. 84–91).
Fisher, R. A. (1935) The Design of Experiments (Edinburgh, Oliver & Boyd; repr. 8th edn, 1966).
Fisher, R. A. (1939) ‘Student’, Annals of Eugenics, 9, pp. 1–9 at <http://www.library.adelaide.
edu.au/digitised/fisher/165.pdf> (Reproduced with permission of Cambridge University Press).
Frick, R. W. (1996) The Appropriate Use of Null Hypothesis Testing, Psychological Methods, 1.4,
pp. 379–390.
Gigerenzer, G. (1993) The Superego, the Ego, and the Id in Statistical Reasoning, in: G. Keren and
C. Lewis (eds) A Handbook for Data Analysis in the Behavioral Sciences: Methodological
Issues (Hillsdale, NJ, Lawrence Erlbaum Associates), pp. 311–339.
Gliner, J. A., Leech, N. L. and Morgan, G. A. (2002) Problems with Null Hypothesis Significance
Testing (NHST): What do the Textbooks Say?, Journal of Experimental Education, 71.1,
pp. 83–92.
Gorard, S., Prandy, K. and Roberts, K. (2002) An Introduction to the Simple Role of Numbers in
Social Science Research. ESRC (Economic and Social Research Council) Teaching and
Learning Research Programme, Research Capacity Building Network, Occasional Paper Series,
Paper 53 at <http://www.cf.ac.uk/socsi/capacity/Papers/roleofnumbers.pdf>.
Hacking, I. (1965) Logic of Statistical Inference (Cambridge, Cambridge University Press).
Hubbard, R. and Bayarri, M. J. (2003) Confusion Over Measures of Evidence (p’s) versus Errors
(α’s) in Classical Statistical Testing, The American Statistician, 57.3, pp. 171–182.
Kerlinger, F. (1986) Foundations of Behavioral Research (New York, Holt).
Lakatos, I. (1973) Lectures on Scientific Method, in: M. Motterlini (ed.) For and Against Method
(Chicago, University of Chicago Press).
Medawar, P. (1963) Is the Scientific Paper a Fraud? Repr. in his The Strange Case of the Spotted
Mice and Other Classic Essays on Science (Oxford, Oxford University Press), 1996, pp. 33–39.
Meehl, P. E. (1978) Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow
Progress of Soft Psychology, Journal of Consulting and Clinical Psychology, 46.4, pp. 806–834.
Neumann, J. von (1947) The Mathematician, in: R. B. Heywood (ed.) The Works of the Mind
(Chicago, University of Chicago Press).
Neyman, J. (1957) Inductive Behavior as a Basic Concept of Philosophy of Science, International
Statistical Review, 25, pp. 7–22.
Pearson, E. S. (1962) Some Thoughts on Statistical Inference, Annals of Mathematical Statistics,
33, pp. 394–403.
Polanyi, M. (1962) Personal Knowledge: Towards a Post-Critical Philosophy (London, Routledge
& Kegan Paul).
Popper, K. R. (1959) The Logic of Scientific Discovery (Logik der Forschung, 1934) (London,
Hutchinson).
Rosnow, R. L. and Rosenthal, R. (1989) Statistical Procedures and the Justification of Knowledge
in Psychological Science, American Psychologist, 44, pp. 1276–1284.
Royall, R. (1997) Statistical Evidence—a Likelihood Paradigm (Boca Raton, FL, Chapman and
Hall).
Rozeboom, W. W. (1960) The Fallacy of the Null-Hypothesis Significance Test, Psychological
Bulletin, 57, pp. 416–428.
Salsburg, D. S. (1985) The Religion of Statistics as Practiced in Medical Journals, American
Statistician, 39, pp. 220–223.
Sober, E. (2002) Intelligent Design and Probability Reasoning, International Journal of
Philosophy of Religion, 52, pp. 65–80.
Sohn, D. (2000) Significance Testing and the Science [Comment], American Psychologist, 55.8,
pp. 964–965.
Wang, C. (1993) Sense and Nonsense of Statistical Inference: Controversy, Misuse, and Subtlety
(New York, Marcel Dekker).