A measure based on a population is called a parameter, while a measure based on a sample is called a statistic. The population mean is a parameter and is represented by the symbol "μ" (a Greek letter). The sample mean is a statistic and is represented by x̄.
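For illustration, a short Python sketch (the exam scores and the sample size of 50 are invented assumptions for this example) shows the distinction between the parameter μ and the statistic x̄:

import random
import statistics

# Hypothetical population of 1,000 exam scores (illustrative data only)
random.seed(42)
population = [random.gauss(75, 10) for _ in range(1000)]

# Parameter: the population mean (mu), computed from every member of the population
mu = statistics.mean(population)

# Statistic: the sample mean (x-bar), computed from a random sample of 50
sample = random.sample(population, k=50)
x_bar = statistics.mean(sample)

print(f"Population mean (parameter, mu): {mu:.2f}")
print(f"Sample mean (statistic, x-bar):  {x_bar:.2f}")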
Inferential statistics requires that the sample be drawn by random sampling. If there is bias in sampling, the inference may be wrong. To determine whether an inference is valid, testing for statistical significance is very important. To be statistically significant, any relationship or difference must be due to planned interventions rather than to chance.
Statistical significance
Statistical significance is a concept that indicates whether conclusions derived from a data set are unlikely to be the outcome of chance; it helps to determine whether a study's results are "real" and trustworthy.
The p value, or probability value, tells you the statistical significance of a finding. In most studies, a p
value of 0.05 or less is considered statistically significant, but this threshold can also be set higher or
lower.
Every statistical test produces two results:
• A test statistic that indicates how closely your data match the null hypothesis.
• A corresponding p value that tells you the probability of obtaining this result if the null hypothesis is true.
The p value determines statistical significance. An extremely low p value indicates high statistical
significance, while a high p value means low or no statistical significance.
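As a rough sketch of how a test statistic maps onto a p value, the lines below compute a two-tailed p value from the standard normal distribution using scipy; the z statistic of 2.10 and the two-tailed setup are assumptions made only for illustration:

from scipy import stats

# Assumed test statistic from a z-test (hypothetical value)
z = 2.10

# Two-tailed p value: probability of a result at least this extreme
# if the null hypothesis is true
p_value = 2 * stats.norm.sf(abs(z))

print(f"z = {z}, p = {p_value:.4f}")  # about 0.036, significant at the .05 level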
How do you test for statistical significance?
In quantitative research, data are analyzed through null hypothesis significance testing, or hypothesis testing.
- Hypothesis testing always starts with the assumption that the null hypothesis is true.
To begin, the research predictions are rephrased into two main hypotheses: the null and the alternative hypothesis.
• A null hypothesis (H0) always predicts no true effect, no relationship between variables, or no
difference between groups.
• An alternative hypothesis (Ha or H1) states your main prediction of a true effect, a relationship between variables, or a difference between groups. (The "a" and the "1" are written as subscripts, slightly below and smaller than the normal text.)
e.g. Formulating a null and alternative hypothesis. You design an experiment to test whether actively
smiling can make people feel happier. To begin, you restate your predictions into a null and alternative
hypothesis.
H0: There is no difference in happiness between actively smiling and not smiling.
Ha: Actively smiling makes a difference in happiness compared with not smiling.
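As a minimal sketch of how this smiling experiment could be analyzed, the null hypothesis is tested below with scipy; the happiness ratings are invented, and the choice of an independent-samples t-test is an assumption made for illustration:

from scipy import stats

# Hypothetical happiness ratings (1-10 scale), for illustration only
smiling = [7, 8, 6, 9, 7, 8, 7, 9, 8, 7]
not_smiling = [5, 6, 7, 5, 6, 6, 7, 5, 6, 6]

# Independent-samples t-test of H0: no difference in mean happiness
t_stat, p_value = stats.ttest_ind(smiling, not_smiling)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Reject H0: happiness differs between the two groups.")
else:
    print("Fail to reject H0.")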
There are two types of errors involved with hypothesis testing. A Type I error is committed when a researcher rejects a null hypothesis that is in fact true. The second type of error, a Type II error, occurs when the data from the sample produce results that fail to reject the null hypothesis when in fact the null hypothesis is false and should be rejected.
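To make the Type I error concrete, here is a small simulation sketch (α = .05, the sample sizes, and the normal population are all assumptions for the example): both groups are drawn from the same population, so the null hypothesis is actually true, yet roughly 5 in 100 tests still reject it by chance.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
trials = 2000
false_rejections = 0

for _ in range(trials):
    # Both samples come from the same population, so H0 is actually true
    a = rng.normal(loc=50, scale=10, size=30)
    b = rng.normal(loc=50, scale=10, size=30)
    _, p = stats.ttest_ind(a, b)
    if p <= alpha:  # rejecting H0 here is a Type I error
        false_rejections += 1

print(f"Observed Type I error rate: {false_rejections / trials:.3f}")  # close to 0.05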
Nonparametric tests do not assume normally distributed populations or similar variances. Nonparametric tests are the only tests that can be used with nominal or ordinal data.
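As an illustrative sketch, the Mann-Whitney U test below is one such nonparametric test; the ordinal satisfaction ratings are invented for the example:

from scipy import stats

# Hypothetical ordinal data: satisfaction ratings on a 1-5 scale
group_a = [4, 5, 3, 4, 5, 4, 3, 5]
group_b = [2, 3, 2, 3, 1, 2, 3, 2]

# Mann-Whitney U test: compares two groups without assuming normality
u_stat, p_value = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"U = {u_stat}, p = {p_value:.4f}")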
The main concern or idea in hypothesis testing is to ensure that what is observed from sample data and
generalized to population phenomena is not due to chance.
The following outlines the steps in hypothesis testing in any given situation.
1. State the null hypothesis. The null hypothesis (H₀) is a statement that no difference exists between the averages or means of two groups.
2. Choose the statistical test and perform the calculation. A researcher must determine the measurement scale, the type of variable, the type of data gathered, and the number of groups or categories.
3. State the level of significance for the statistical test. The level of significance is determined before the test is performed. It has been traditionally accepted by various schools of thought to use alpha (α) to denote the level of significance in rejecting the null hypothesis. It is equivalent to the amount of risk regarding the accuracy of the test that the researcher is willing to accept. The levels most frequently used are .05, .01, and .001. An α level of .05 implies that the probability of committing an error by chance is 5 in 100.
4. Compute the calculated value. Use the appropriate formula (discussed in Lesson 5) for the significance
test to obtain the calculated value.
5. Determine the critical value the test statistic must attain to be significant. After you have computed the calculated value, look up the critical value in the appropriate table for the distribution. The critical value separates the region of rejection from the region of acceptance of the null hypothesis.
6. Make the decision. If the calculated value is greater than the critical value, you reject the null hypothesis. If the critical value is larger, you conclude that you have failed to reject the null hypothesis. (A worked sketch of these steps follows.)
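The following Python sketch walks through steps 1 to 6 for two hypothetical groups; the scores, α = .05, and the choice of an independent-samples t-test are assumptions made for illustration, and the critical value is taken from scipy's t distribution rather than a printed table:

from scipy import stats

# Step 1: H0 - no difference exists between the means of the two groups
group_1 = [12, 15, 14, 10, 13, 14, 16, 12, 13, 15]
group_2 = [10, 11, 12, 9, 11, 10, 12, 11, 10, 11]

# Step 2: interval data, two independent groups -> independent-samples t-test
t_calc, p_value = stats.ttest_ind(group_1, group_2)

# Step 3: level of significance, chosen before the test is performed
alpha = 0.05

# Step 4: the calculated value is t_calc, computed above

# Step 5: critical value for a two-tailed test with n1 + n2 - 2 degrees of freedom
df = len(group_1) + len(group_2) - 2
t_crit = stats.t.ppf(1 - alpha / 2, df)

# Step 6: make the decision
print(f"calculated t = {t_calc:.2f}, critical t = {t_crit:.2f}, p = {p_value:.4f}")
if abs(t_calc) > t_crit:
    print("Reject the null hypothesis: the difference is statistically significant.")
else:
    print("Fail to reject the null hypothesis.")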