Cooking Perfect Cupcakes: Freeing Curricula Context Gives Student-Centred Pedagogy For Course On Experimental Design
Keywords
Inclusive Curriculum, Student-Centred Learning, Socialized Learning, Design
of Experiments
1. Article Structure
The article begins with three background sections: first a short general back-
ground on the University context that led to the course being developed; second
a background on the course learning objectives, structure and the topics students
chose freely for their individual research; and third a more traditional literature
review of the inclusive pedagogy employed for the course. The research method used in this case study was direct observation of the students' enthusiasm and success, made possible because the teacher was also an experienced educational researcher (see Joiner, 1999; Aldis, Sidhu, & Joiner, 1999; Joiner, Malone, & Haimes, 2002) and the number of students was small.
The article proper then showcases the stages in the investigation of cupcake
baking, starting with setting up the experiment and how to judge the cupcakes
(measurement system analysis), as shown in Figure 1.
Once the measurement system was suitable, the process screened the significant cupcake baking factors (e.g., oven temperature) from all the possible factors,
and then involved a detailed modelling test and optimization of settings for the
best cupcake baking. The article finishes with two conclusions, one for cupcake baking and the other for the benefits of inclusive curriculum design through open student choice of research context.
2. General Background
The University of NSW has developed one of Australia’s largest and most popu-
lar higher degree programs in systems engineering and project management, le-
veraging a close relationship with the Australian Department of Defence (DoD)
that formed initially around the delivery of undergraduate programs to the Aus-
tralian Defence Force Academy in Canberra, but which has since evolved to all
aspects of normal university offerings. Most of the higher-degree coursework
students are Defence Force members or public servants studying part-time. As a
consequence, the technological contexts used are often skewed heavily towards
ships, aircraft, missiles, land vehicles and other warfare fields like cybersecurity.
Also, the STEM trends in the U.S. DoD are always closely monitored and fol-
lowed wherever possible, as these fields are relevant to DoD students because of
the close alliance between the two countries and the fact that much of Australia’s
DoD materiel is procured from the U.S. (Defence, 2014). Notwithstanding these
general contextual tendencies, the University has worked to broaden the appeal
of its degrees, so as to help attract and retain non-traditional parts of society to
STEM subjects and careers. Such appeal is considered critical because Australia
has a low population and yet an enormous area to defend, and it has an aging
population [2], meaning that the DoD must compete for a smaller work-ready
population, especially the school-leaver population (Defence, 2016: 150-152).
The University of NSW’s Master of Systems Engineering and Master of Project
Management are popular coursework higher degrees that have as one of their
core subjects an introduction to test and evaluation (T&E). Despite many years
of growth in these courses, they did not have any advanced elective subjects in
T&E techniques. A recent comparison of Australia's T&E rigour with that of the U.S. found that the U.S. DoD had prescribed and implemented new advanced scientific techniques and competencies [3]. As such, UNSW Australia partnered with Air Academy Associates, LLC (AAA) in the U.S., a leading research and training firm in these advanced T&E techniques, to adapt one
of their courses to the University’s Master programs. The resulting elective sub-
ject, known as ZEIT 8034 Advanced T&E Techniques, was developed in October
2015 to February 2016 and trialed for the first time in Semester One of 2016
(February to June) with 15 volunteer students (4 female, 11 male). The subject
has successfully run twice since the inaugural trial course.
Figure 2. Illustration of the Statapult® Catapult system shown in use by one of the au-
thors.
There is a 15-minute video compilation of the course at the following link that
shows the Statapult® in use with student testimonials about what they learnt in
various parts of the course:
https://www.unsw.adfa.edu.au/capability-systems-centre/advanced-test-and-e
valuation-techniques#overlay-context.
At the end of the intensive week, students propose to the teacher and their
peers a system that they will have access to and that they want to screen and
model using their new test design and analysis techniques. Topics chosen by the
students in this inaugural class are listed in Table 1, where a third of students
did work-related topics, just over half did hobbies or interests and the remaining
two students (13 percent) chose topics for easy testing.
Such context choices are intended to meet the challenge of making tertiary
education internationally inclusive by “developing curricular contexts that ex-
tend themselves meaningfully into the personal life-worlds (i.e. environment
from the perspective of an individual) of students” (Rasi, Hautakangas, & Väy-
rynen, 2015: 134). Oral presentation and group discussion of those personal
choices to their peers allowed students to “connect the theories, concepts and
issues being taught to their life-worlds” (p. 139) and thus be more inclusive in
many different ways.
Students undertook their personal research over the following months at work
or home, with some mentoring at key times by the teacher. Students also undertook a knowledge quiz on the key concepts. Assessment was divided into: 35
percent for the collaborative assignment report concerning the Statapult® system,
15 percent for the computerized knowledge quiz and 50 percent for the individ-
ual research and report.
Table 1. Topics chosen by the students for their individual research; for example, one male PhD student chose the effect of supply chain disruption on capital project success.
Figure 3. A fun collaborative capstone activity was the timing of accurate hits on castle walls.
students who are keener in this type of exploratory learning. What breaks down such barriers more than anything else, once students are researching their own context, is that they are the expert on their chosen system, and they enjoy teaching the teacher about that system in the process of getting any help they might need.
Figure 5. Cause and effect diagram for cupcake baking, showing the output and the contributing factors: (X) oven temperature, (X) fill of pan, (X) position in oven, (N) outside temperature, (N) humidity, (C) ingredients, (C) baker, (C) recipe (source: SPC XLTM (iv)).
Figure 6. Experimental design diagram for baking a cupcake (source: Microsoft PowerPointTM). Controlled inputs: ingredients, method of making batter, recipe, baker, oven. Experimental input factors and levels: Preheat Oven (yes, no); Temperature of Oven (140, 180 deg. C); Oven Setting (fan forced, no fan); Fill of Pan (half, full); Size of Pan (small, large); Cooking Time (10, 20 min); Preheat Pan (fully, 10 min, cold pan); Position in Oven (top, bottom). Noise factors: air temperature, humidity. Output responses: moisture, appearance, how well cooked.
Figure 6 shows the eight input factors and three response outputs for the system, ready to do a screening test. Shown for the
input factors are the high and low settings (i.e., two-level) such as ten to twenty
minutes for cooking time and 140 to 180 degrees Celsius for oven temperature.
The output responses chosen needed to be measured consistently by judges. The
grading index shown in Table 2 was used to ensure all judges were aware of
what qualified for each rating, so as to reduce the variation within the measured results. This Likert-scale approach allows the qualitative data arising from human perspective and preference to be treated somewhat as quantitative data. The "moisture" and "how-well cooked" output scales have an optimum in the centre (3), whereas the optimum for the "appearance" output scale is five.
Standard operating procedures (SOPs) were developed for each of the con-
trolled variables so as to minimize unwanted variation; principal among these
was the all-important recipe from Australian Good Taste (2016) and the one
home oven as given in Figure 7.
Table 2. Grading index for the three cupcake output ratings.
Moisture: 5 Dry; 4 Quite even but crumbly, lack of moisture; 3 Evenness all the way through, perfect crumble and moist; 2 Very moist and very dense; 1 Overly gooey, fondant-like texture.
Appearance: 5 Perfectly risen, symmetrical, homogeneous; 4 Well risen but imperfect symmetry; 3 Slightly risen; 2 Only risen on edges; 1 Not risen at all.
How-well cooked: 5 Burnt; 4 Overdone, dry; 3 Perfectly cooked; 2 Slightly raw; 1 Raw.
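A minimal Python sketch (names illustrative) of how this grading index and each scale's ideal rating might be encoded for the analysis that follows:

```python
# Grading index transcribed from Table 2, plus the ideal rating for each scale
# (centre of scale for moisture and how-well cooked, top of scale for appearance).
GRADING_INDEX = {
    "moisture": {5: "Dry", 4: "Quite even but crumbly, lack of moisture",
                 3: "Even all the way through, perfect crumble and moist",
                 2: "Very moist and very dense",
                 1: "Overly gooey, fondant-like texture"},
    "appearance": {5: "Perfectly risen, symmetrical, homogeneous",
                   4: "Well risen but imperfect symmetry", 3: "Slightly risen",
                   2: "Only risen on edges", 1: "Not risen at all"},
    "how_well_cooked": {5: "Burnt", 4: "Overdone, dry", 3: "Perfectly cooked",
                        2: "Slightly raw", 1: "Raw"},
}
IDEAL_RATING = {"moisture": 3, "appearance": 5, "how_well_cooked": 3}
```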
For such data, according to the course texts, the number of people multiplied by the number of parts must ideally be greater than or equal to 60. As such, the MSA was conducted with six judges (operators) and ten different bake settings (parts), replicated twice to determine the variation. The raw MSA data and ANOVA
analysis of the MSA results for “appearance” are shown as an example in Table 3
and Table 4.
The precision-to-total ratio from the ANOVA for the MSA of the "appearance" grading lies between 0.10 and 0.30 and is therefore sufficient to proceed to testing in accordance with the course texts. The resolution is also greater than five (6.8) and is therefore adequate to proceed. The operator-to-part interaction is high (96%), showing some bias towards certain cupcakes dependent on the judges' personal preferences. The consistency in judges' ratings across the ten bakes is shown in Figure 8, illustrating that judges mainly rated differently in Baking 1 and Baking 2 but consistently thereafter.
Figure 8. Variation in six judges' ratings of cupcake "Appearance" for ten bakings (source: DOE XLTM (iv)).
Table 3. MSA data for the "Appearance" scale for the six judges (source: SPC XLTM (iv)).
Table 4. ANOVA for MSA for "Appearance" (source: SPC XLTM (iv)).
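A minimal sketch of the two-way crossed ANOVA that underpins an MSA like Table 4 is given below; because the raw ratings are not reproduced here, the data are randomly generated stand-ins and the column names are illustrative:

```python
# Two-way crossed ANOVA of judge (operator) and bake (part) ratings, the kind of
# decomposition behind Table 4. Data are synthetic placeholders for the real ratings.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for judge in range(6):          # 6 judges (operators)
    for bake in range(10):      # 10 bake settings (parts)
        for _ in range(2):      # 2 replicate ratings
            rating = float(np.clip(np.rint(3 + np.sin(bake) + rng.normal(0, 0.5)), 1, 5))
            rows.append({"judge": f"J{judge}", "bake": f"B{bake}", "rating": rating})
df = pd.DataFrame(rows)

# Bake (part), judge (operator) and their interaction; the resulting mean squares
# feed the repeatability/reproducibility (precision-to-total) calculation.
model = ols("rating ~ C(bake) * C(judge)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```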
The other rating scales were also found to be sufficiently accurate and consis-
tent to proceed to the screening test:
Table 5. Taguchi L12 screening design used for cupcake screening (source: DOE XLTM (iv)). Factors: A Preheat Oven, B Oven Temperature, C Oven Setting, D Fill of Pan, E Size of Pan, F Cooking Time, G Preheat Pan, H Position in Oven; Y1-Y4 are the "moisture" ratings.

Row #   A   B     C   D     E   F    G   H     Y1      Y2   Y3   Y4
1       0   140   0   0.5   0   10   0   0     3       3    3    3
2       0   140   0   0.5   0   20   1   1     4       4    4    4
3       0   140   1   1     1   10   0   0     1       1    1    1
4       0   180   0   1     1   10   1   1     1.167   1    1    1
6       0   180   1   1     0   20   1   0     3       3    3    3
7       1   140   1   1     0   10   1   1     1       1    1    1
8       1   140   1   0.5   1   20   1   0     1       1    1    1
9       1   140   0   1     1   20   0   1     2       2    2    2
10      1   180   1   0.5   0   10   0   1     1       1    1    1
12      1   180   0   0.5   1   10   1   0     1       1    1    1
Because there are three output responses and both the average and the variation in each distribution have to be considered, some 24 analyses occurred; hence only an example of each is shown here.
The marginal means plots of the absolute rating value of "moisture", produced for the eight input factors at their high and low values, are shown in Figure 9. They show that Preheat Oven, Oven Setting, Size of Pan and Cooking Time affect the perfect cupcake, with Size of Pan having the greatest effect: the smallest pan drives a higher moisture rating while the largest pan drives a lower moisture rating. The marginal means plots for moisture variation are not shown, but six of the eight input variables affect the variation in moisture of each cupcake, with Oven Setting and Fill of Pan having the least influence.
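A minimal sketch of the marginal-means calculation behind Figure 9, using the Table 5 rows reproduced above and the average of the four moisture ratings per row:

```python
# Marginal (main-effect) means of the moisture rating for each factor level,
# using the screening rows transcribed from Table 5 (Y1-Y4 averaged per row).
import pandas as pd

cols = ["A", "B", "C", "D", "E", "F", "G", "H", "moisture"]
rows = [
    [0, 140, 0, 0.5, 0, 10, 0, 0, 3.000],
    [0, 140, 0, 0.5, 0, 20, 1, 1, 4.000],
    [0, 140, 1, 1.0, 1, 10, 0, 0, 1.000],
    [0, 180, 0, 1.0, 1, 10, 1, 1, 1.042],
    [0, 180, 1, 1.0, 0, 20, 1, 0, 3.000],
    [1, 140, 1, 1.0, 0, 10, 1, 1, 1.000],
    [1, 140, 1, 0.5, 1, 20, 1, 0, 1.000],
    [1, 140, 0, 1.0, 1, 20, 0, 1, 2.000],
    [1, 180, 1, 0.5, 0, 10, 0, 1, 1.000],
    [1, 180, 0, 0.5, 1, 10, 1, 0, 1.000],
]
df = pd.DataFrame(rows, columns=cols)

# The gap between a factor's two level means is its main effect on moisture.
for factor in "ABCDEFGH":
    print(factor, df.groupby(factor)["moisture"].mean().round(3).to_dict())
```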
The multiple response regression analysis for the absolute values of each output cupcake rating is shown in part in Table 6, where the two-tailed significance of each factor is given, with anything significant (p < 0.05) shown in red and anything likely to be significant (0.05 < p < 0.1) shown in blue. The sizes of the non-dimensional coded coefficients directly illustrate the linear size of each factor's effect relative to one another; this is shown graphically for the cupcake "appearance" rating in Figure 10, where again colour coding shows significance and the oven setting and size of pan have the greatest effect.
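The ranking behind such a Pareto plot can be sketched directly from the coded coefficients; the values below are transcribed from the "appearance" column of Table 6:

```python
# Rank the coded "appearance" coefficients from Table 6 by absolute size,
# which is the ordering shown in the Figure 10 Pareto plot.
appearance = {
    "A Preheat Oven": -0.410, "B Temperature": 0.132, "C Oven Setting": -0.701,
    "D Fill of Pan": 0.257, "E Size of Pan": -0.701, "F Cooking Time": 0.604,
    "H Position in Oven": -0.396,
}
for name, coeff in sorted(appearance.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:<20} {coeff:+.3f}")
```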
The coded regression table (Table 6) can be decoded by DOE PRO XL® to provide dimensional equations for both the absolute rating value and the variation. As an example, the simple linear equations for the ratings are as follows, where each factor is abbreviated to a letter (A to H) as shown in Table 6:
"Moisture" rating:
ŷ = 1.910 − 0.611A − 1.069C − 1.389E + 0.132F − 0.319G − 0.389H
"Appearance" rating:
ŷ = 0.979 − 0.819A + 0.007B − 1.403C + 1.028D − 1.403E + 0.121F − 0.792H
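As a check on this decoding, and assuming the standard ±1 coding of each two-level factor (which is consistent with the numbers above), the dimensional coefficient is the coded coefficient from Table 6 divided by the factor's half-range:

$$ b_{\mathrm{dimensional}} = \frac{b_{\mathrm{coded}}}{(x_{\mathrm{high}}-x_{\mathrm{low}})/2}, \quad \text{e.g. } \frac{0.660}{(20-10)/2}=0.132 \ (\text{F, Cooking Time}), \quad \frac{-0.694}{(1-0)/2}\approx -1.389 \ (\text{E, Size of Pan}). $$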
Figure 9. Marginal means plots for the effect of each input factor on absolute value of
cupcake “Moisture” grading (source: DOE XLTM (iv)).
Table 6. Multiple response regression analysis in screening for the absolute values of the three cupcake gradings (source: DOE XLTM (iv)).

Y-hat model              Moisture                   Appearance                 How well cooked
Factor  Name             Coeff    P(2 Tail)  Tol    Coeff    P(2 Tail)  Tol    Coeff    P(2 Tail)  Tol
Const                    2.000    0.000      -      2.410    0.000      -      2.069    0.000      -
A   Preheat Oven         -0.306   0.000      1      -0.410   0.000      1      -0.257   0.000      1
B   Temperature          -        -          -      0.132    0.064      1      0.160    0.001      1
C   Oven Setting         -0.535   0.000      1      -0.701   0.000      1      -0.611   0.000      1
D   Fill of Pan          -        -          -      0.257    0.001      1      0.201    0.000      1
E   Size of Pan          -0.694   0.000      1      -0.701   0.000      1      -0.653   0.000      1
F   Cooking Time         0.660    0.000      1      0.604    0.000      1      0.625    0.000      1
G   Preheat Pan          -0.160   0.000      1      -        -          -      -0.188   0.000      1
H   Position in Oven     -0.194   0.000      1      -0.396   0.000      1      -0.229   0.000      1
Figure 10. Pareto Plot of Input Factor Effects on Absolute Value of Cupcake “Appearance”
Grading (source: DOE XLTM (iv)).
The fit statistics for the three models are close to each other and reasonably high (the lowest is 0.88), such that the regression models are good fits (the rule-of-thumb being greater than 0.7). The tolerance (Tol) in Table 6 is one for all factors, showing the test was orthogonal.
The multiple response regression table for variation in each of the responses is
not shown but revealed the following regarding the most likely “spread shifters”:
Oven Temperature is likely significant on variation of “Moisture” rating.
Preheat Oven and Fill-of-Pan are significant on variation of “Appearance”
rating.
Oven Temperature is significant and Oven Position is likely significant on
variation of “How-well-cooked ” rating.
The final tool used was the DOE PRO XL® optimizer, which involves setting desired constraints, weighted as necessary, to examine ideal settings, noting that at this stage the model is only linear. The optimizer can target the ideal ratings and a minimization of variance; however, for simplicity, only the optimization results for the optimum ratings are shown in Table 7. In this table there are two optimizations, one with all three ratings weighted equally at their ideal rating values and one optimizing "appearance" only. Clearly, if a cupcake just has to look good, a higher baking temperature and longer cook time are likely to give better results. These investigations appear to confirm that there is an optimum temperature between 140 and 180 degrees Celsius and an optimal baking time between 10 and 20 minutes.
Table 7. Optimizer results for the two cases (all three ratings weighted equally, and "appearance" only): Preheat Oven 0, 0; Oven Setting 0, 0; Size of Pan 0, 0; Preheat Pan 1, 1; Position in Oven 0, 0.
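A minimal sketch of the kind of weighted search such an optimizer performs, here as a brute-force pass over the coded two-level settings using the Table 6 coefficients and the Table 2 ideal ratings (this is illustrative only, not DOE PRO XL's actual algorithm):

```python
# Search all coded (-1/+1) settings of the eight screening factors for the
# weighted best match to the ideal ratings (moisture 3, appearance 5, cooked 3).
# Coefficients transcribed from Table 6; terms absent from Table 6 are zero.
from itertools import product

FACTORS = "ABCDEFGH"
MODELS = {
    "moisture":   {"const": 2.000, "A": -0.306, "C": -0.535, "E": -0.694,
                   "F": 0.660, "G": -0.160, "H": -0.194},
    "appearance": {"const": 2.410, "A": -0.410, "B": 0.132, "C": -0.701,
                   "D": 0.257, "E": -0.701, "F": 0.604, "H": -0.396},
    "cooked":     {"const": 2.069, "A": -0.257, "B": 0.160, "C": -0.611,
                   "D": 0.201, "E": -0.653, "F": 0.625, "G": -0.188, "H": -0.229},
}
IDEAL = {"moisture": 3, "appearance": 5, "cooked": 3}
WEIGHTS = {"moisture": 1, "appearance": 1, "cooked": 1}

def predict(model, setting):
    """Linear coded-model prediction for one combination of factor settings."""
    return model["const"] + sum(model.get(f, 0.0) * x for f, x in zip(FACTORS, setting))

best = min(product([-1, 1], repeat=8),
           key=lambda s: sum(WEIGHTS[r] * (predict(m, s) - IDEAL[r]) ** 2
                             for r, m in MODELS.items()))
print(dict(zip(FACTORS, best)))
```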
One of the benefits of the Box Behnken design is that it has a reduced test demand compared to a full factorial test while still providing information on the main effects, two-way interactions and quadratic terms. A disadvantage of this method is that it cannot show three-way interactions. The setup of the four-factor Box Behnken can be seen in Table 8 and was used to measure all three outputs ("moisture", "appearance" and "how-well cooked") with three repetitions of each test case, as recommended by the texts; a sketch of such a construction follows.
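A minimal sketch of how a four-factor Box Behnken design can be constructed (the helper function and the decoding to Table 8 units are illustrative, not the DOE XL generator):

```python
# Box Behnken construction: each pair of factors takes the four +/-1 corner
# combinations while the remaining factors sit at 0, plus repeated centre points.
# With 4 factors this gives 6 pairs x 4 + 3 centres = 27 runs, matching Table 8.
from itertools import combinations, product

def box_behnken(n_factors: int, centre_points: int = 3):
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[i], run[j] = a, b
            runs.append(run)
    runs += [[0] * n_factors for _ in range(centre_points)]
    return runs

# Decode the coded levels into the Table 8 engineering units
centres = {"Preheat": 0.5, "Temperature": 170, "Time": 15, "Fill": 0.75}
half_ranges = {"Preheat": 0.5, "Temperature": 10, "Time": 5, "Fill": 0.25}
names = list(centres)
for run in box_behnken(4):
    print({n: centres[n] + half_ranges[n] * lvl for n, lvl in zip(names, run)})
```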
As recommended by the texts, the origin point is repeated three times (test cases 9, 18 and 27) to help with orthogonality, and the repeats are used to help check the consistency of the testing.
Table 8. Four-factor, three-level, 27-test Box Behnken test design used to model the cupcake baking in detail (source: DOE XLTM (iv)).
Row #   Preheat (A)   Temperature (B)   Time (C)   Fill (D)   Moisture Y1   Y2   Y3
1 0 160 15 0.75 3.4 3 3.2
2 0 180 15 0.75 2.9 3.2 3.05
3 1 160 15 0.75 3.4 3.7 3.5
4 1 180 15 0.75 4.4 4.2 4.3
5 0.5 170 10 0.5 3.3 2.8 3
6 0.5 170 10 1 2.4 2.4 2.4
7 0.5 170 20 0.5 5 5 5
8 0.5 170 20 1 4.5 4.8 4.7
9 0.5 170 15 0.75 3.9 3.8 3.8
10 0 170 15 0.5 3 3.4 3.2
11 0 170 15 1 3.2 2.4 2.8
12 1 170 15 0.5 5 3.8 4.4
13 1 170 15 1 3.5 3.1 3.3
14 0.5 160 10 0.75 2.2 2.4 2.2
15 0.5 160 20 0.75 4 4.1 4.1
16 0.5 180 10 0.75 2.9 2.6 2.8
17 0.5 180 20 0.75 5 5 5
18 0.5 170 15 0.75 4.1 4.2 4.15
19 0 170 10 0.75 1.1 1.1 1
20 0 170 20 0.75 3.8 3.6 3.85
21 1 170 10 0.75 3.6 3.3 3.5
22 1 170 20 0.75 4.8 4.6 4.7
23 0.5 160 15 0.5 3.5 3.6 3.5
24 0.5 160 15 1 3.2 3.2 3.1
25 0.5 180 15 0.5 4.8 4.1 4.4
26 0.5 180 15 1 4.4 3.6 4.1
27 0.5 170 15 0.75 3.6 3.7 3.6
Once again, similar to the screening, five judges were used to taste the cupcakes and record their ratings for "appearance", "moisture" and "how-well cooked" for each repetition of each cupcake. This time a 0.5 mark was included between each grading to allow better distinction between similar cupcakes.
So in total the number of cupcakes baked for the detailed modelling was 27 by three, or 81 bakes, with five scores for each bake making 405 test points: once again a lot of free tasting, and advertising of the rigorous test process within the workplace!
Figure 11. Test consistency at the test space origin (test cases 9, 18 and 27) for each cupcake rating (source: Microsoft ExcelTM).
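Before the Table 9 results, a minimal sketch of how such a quadratic (response surface) model could be fitted with open-source tools, assuming the 27 Table 8 runs and their mean ratings sit in a CSV file whose name and column names are illustrative; this parallels, but is not, the DOE PRO XL fit:

```python
# Fit a quadratic response-surface model (main effects, two-way interactions and
# squared terms) to the Box Behnken runs, analogous to the Table 9 regression.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cupcake_box_behnken.csv")   # illustrative file name

# Code each factor to -1..+1 (Table 8 centres and half-ranges) so that the
# coefficients are directly comparable, as in Table 9.
for col, centre, half in [("preheat", 0.5, 0.5), ("temperature", 170, 10),
                          ("time", 15, 5), ("fill", 0.75, 0.25)]:
    df[col] = (df[col] - centre) / half

formula = ("moisture ~ (preheat + temperature + time + fill)**2 "
           "+ I(preheat**2) + I(temperature**2) + I(time**2) + I(fill**2)")
fit = smf.ols(formula, data=df).fit()
print(fit.summary())   # insignificant terms can then be dropped, as in Table 9
```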
Table 9. Multiple-response regression analysis from test results with insignificant factors and interactions removed (source: DOE XLTM (iv)).

Y-hat model              Moisture                   Appearance                 How well cooked
Factor  Name             Coeff    P(2 Tail)  Tol    Coeff    P(2 Tail)  Tol    Coeff    P(2 Tail)  Tol
Const                    3.838    0.000      -      3.459    0.000      -      3.833    0.000      -
A   Preheat              0.553    0.000      1      0.747    0.000      1      0.475    0.000      1
B   Temperature          0.318    0.000      1      -0.019   0.805      1      0.324    0.000      1
C   Time                 1.015    0.000      1      0.375    0.000      1      0.885    0.000      1
D   Fill                 -0.269   0.000      1      -0.347   0.000      1      -0.106   0.020      1
AB                       0.229    0.007      1      -        -          -      0.158    0.043      1
AC                       -0.363   0.000      1      -0.958   0.000      1      -0.163   0.038      1
AD                       -0.175   0.036      1      0.333    0.017      1      -        -          -
CD                       -        -          -      -        -          -      0.183    0.020      1
AA  Preheat²             -0.374   0.000      0.96   0.478    0.000      0.9    -0.449   0.000      0.96
BB  Temperature²         -        -          -      -0.264   0.020      0.9    -        -          -
CC  Time²                -0.197   0.003      0.96   -        -          -      -0.276   0.000      0.96
DD  Fill²                -        -          -      -0.722   0.000      0.9    -        -          -
"Appearance":
ŷ = −81.202 + 3.333·Preheat + 0.895·Temp + 0.267·Time + 14.611·Fill − Preheat·(0.383·Time − 2.667·Fill) + 1.911·Preheat² − 0.003·Temp² − 11.556·Fill²
"How-well cooked":
ŷ = −3.562 − 1.662·Preheat + 0.017·Temp + 0.431·Time − 2.622·Fill + Preheat·(0.032·Temp − 0.065·Time) + 0.147·Time·Fill − 1.797·Preheat² − 0.011·Time²
Such equations describe four-dimensional response surfaces (one per rating) that are hard to envisage. Fortunately, DOE PRO XLTM provides some impressive tools to explore the effect space that has been modelled. By this stage in the course, students usually know their test domain and the tools, and they go competently to the optimizer and run "what if" cases. Before showing that, though, it is worth showing
two graphs that illustrate two aspects of the cupcake baking model. The first in
Figure 12 holds constant a cooking time of 15 minutes and a preheated pan in
order to show there is a fairly wide optimum of cooking temperature but a nar-
row optimum in pan fill whereby the pan needs about a 15 percent air gap from
the pan lip.
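A minimal sketch of how such a slice can be reproduced outside the tool, evaluating the coded "appearance" model from Table 9 over a temperature and pan-fill grid at a 15-minute cooking time with a fully preheated pan (coefficients transcribed from Table 9; coding uses the Table 8 centres and half-ranges):

```python
# Evaluate the coded "appearance" model over a temperature/fill grid, the kind
# of slice shown in Figure 12 (Preheat = 1, Time = 15 minutes held constant).
import numpy as np

def appearance_coded(a, b, c, d):
    # a, b, c, d are coded (-1..+1) preheat, temperature, time and fill
    return (3.459 + 0.747*a - 0.019*b + 0.375*c - 0.347*d
            - 0.958*a*c + 0.333*a*d
            + 0.478*a**2 - 0.264*b**2 - 0.722*d**2)

a = (1.0 - 0.5) / 0.5       # fully preheated pan
c = (15 - 15) / 5           # 15-minute cooking time
for fill in np.linspace(0.5, 1.0, 5):
    d = (fill - 0.75) / 0.25
    row = [appearance_coded(a, (t - 170) / 10, c, d) for t in np.linspace(160, 180, 5)]
    print(f"fill={fill:.2f}: " + "  ".join(f"{y:4.2f}" for y in row))
```

The smaller curvature on temperature (−0.264) than on fill (−0.722) is what produces the wide temperature optimum and narrow fill optimum described above.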
A second graph, in Figure 13, tries to illustrate the complex effect of preheating the pan by holding cooking temperature and cooking time constant. Both the formulae and this "uneven saddle" graph show that preheating is the most complex factor, being the most prevalent quadratic and interaction term. Preheating was almost discarded in screening in order to get down to a three-factor test, and the resultant model justifies the decision to keep it, as it is complex across all three cupcake ratings.
Figure 12. "Appearance" rating for a cooking time of 15 minutes and a preheated pan (source: DOE XLTM (iv)).
Figure 13. "Appearance" rating for a fixed cooking time of 15 minutes and cooking temperature of 170 degrees (source: DOE XLTM (iv)).
Table 10. Optimal settings for six different cases of what a cook might want. The cases differ in which output ratings are weighted (e.g., all three equally, "appearance" only, or "moisture" and "how-well cooked" only) and in whether the variances of the outputs are also weighted.
Case            1     2     3     4     5     6
Preheat Pan?    yes   no    no    yes   no    no
Table 11. Confidence intervals for all-round best cupcake settings (Case 3) (source: DOE
XLTM (iv)).
Table 12. Confidence intervals for the best iced cupcake settings (Case 6) (source: DOE
XLTM (iv)).
The baked cupcakes themselves showed much of the variety in the different baking settings. From general observations, the mushroom-like muffin top was a result of a full pan fill (i.e., 1) and a low oven temperature. The cupcakes with the burnt edges were a result of the longer cook time, and the spherical top on some was often a result of a hot, preheated oven. Note also that there is some variance between cupcakes in each row despite the cupcakes in each row being baked under the same settings.
11. Conclusion
A new post-graduate tertiary course in advanced test and evaluation techniques
(experimental design) provided an opportunity for a more inclusive curriculum
through structured collaborative learning on a fun learning device, followed by
students having an open choice of a system from their work or personal interests
to analyze themselves over the following months with mentoring from their
teacher. This extended the curriculum to students’ “life-worlds” as proposed by
Rasi et al. (2015). Presenting on that choice to their peers and instructor further
empowered students to share their “life-worlds” in ways that leveraged and en-
hanced the social aspect of learning and formed greater trust with the teacher for
the part-time mentoring phase. Because students go on to also share their ana-
lyses with their work colleagues, hobby friends and family, their new knowledge
is reinforced in personal ways entirely consistent with both Constructivist and
Vygotskian educational theory (Udvari-Solner & Thousand, 1996). The research me-
thod used in this case study was direct observation of the students’ enthusiasm
and success, made possible because the teacher was also an experienced educa-
tional researcher and the number of students was small. The work is only offered
as an encouraging example of curricular techniques to try for greater inclusion.
The showcased work on cupcake baking illustrates how a female electronics engineer, after the fun collaborative learning, was able to bring her passion for cooking into the class and then, over the following months, conduct 139 individual bakings and 753 judgings, obtaining 2259 judge ratings amongst her work colleagues and friends. This enabled her to share her new knowledge of advanced test techniques in a very personal way, which will undoubtedly leave robust and enduring conceptions that she can use to benefit her future test work. Her example was not the only one: another female engineer shared her passion for toy slot cars, and another aspiring female researcher brought her commercial business knowledge into the learning, helping break down difficulties with English as a second language. This case study has reinforced that if STEM subjects are to appeal to non-traditional sources of students, then such structured fun learning and open contextualization are key. In this case, a common cooking effort has been analyzed with advanced test techniques, and this should appeal to several non-traditional STEM markets. The social aspect of learning was not only beneficial for females; several fairly reclusive male students blossomed when bringing their hobbies into the class and then their classwork to their hobbies.
There are also other educational aspects at work in the new course as show-
cased in this article. The ability to explore complex systems with relatively
easy-to-learn statistical and experimental design packages involving multiple
visual analysis tools is highly effective computer-assisted learning for engineers
and project managers, very analogous to the burgeoning use of finite-element
modelling packages in research and teaching in the 1980s and 1990s. As such, the
inclusivity of the course is likely to extend to students of lower ability or to those who are more visual learners.
This case study in new curriculum for a complex STEM subject found the
student-centred learning of collaboration, computer-based analysis, and an open
student choice of personal research interests, to be highly inclusive in the ways
proposed by the literature reviewed (Tait, 2009; Ashman, 2010; Koppi et al.,
2010), especially for gender (Wistedt, 1998).
References
Aldis, G. K., Sidhu, H. S., & Joiner, K. F. (1999). Trial of Calculus and Maple with Het-
erogeneous Student Groups at the Australian Defence Academy. International Journal
of Computer Algebra in Mathematics Education, 6, 167-190.
Anonymous (2010). The Navy Workforce-Recruiting, Management and Retention. Naval
Forces, 31, 57-59.
Ashman, A. (2010). Modelling Inclusive Practices in Postgraduate Tertiary Education
Courses. International Journal of Inclusive Education, 14, 667-680.
https://doi.org/10.1080/13603111003778429
Australian Good Taste (2016). Red Velvet cupcakes.
http://www.taste.com.au/recipes/26697/red+velvet+cupcakes
Churchill, B., Denny, L., & Jackson, N. (2014). Thank God You’re Here: The Coming
Generation and Their Role in Future Proofing Australia from the Challenge of Popula-
tion Ageing. Australian Journal of Social Issues, 49, 373-392.
https://doi.org/10.1002/j.1839-4655.2014.tb00318.x
Defence Department Australia (2016). Defence White Paper 2016.
http://www.defence.gov.au
Defence Materiel Organisation and Australian National Audit Office (2014). Report No.
14 2014-15: 2013-14 Major Projects Report. Canberra: ANAO.
Fisher, R. A. (1971 [1935]). The Design of Experiments (9th ed.). Macmillan.
Gilmore, M. J. (2015). Continuing to Advance the Science of Test in Operational Test and
Evaluation. ITEA Journal, 36, 92-95.
Henry, K. (2004). An Ageing Population and the Challenge for Australia’s Living Stan-
dards and Public Policy. Growth, 51, 11-21.
Hyde, M., Carpenter, L. R., & Conway, R. N. (2013). Diversity, Inclusion and Engage-
ment. South Melbourne: Oxford University Press.
Hyter, M. C., & Turnock, J. L. (2006). The Power of Inclusion: Unlock the Potential and
Productivity of Your Workforce. John Wiley & Sons.
Johnson, R. T., Hutto, G. T., Simpson, J. R., & Montgomery, D. C. (2012). Designed Ex-
periments for the Defense Community. Quality Engineering, 24, 60-79.
https://doi.org/10.1080/08982112.2012.627288
Joiner, K. F. (1999). Trialing and Evaluating Reform in Calculus Education. Unpublished
Doctoral Dissertation, Perth: Curtin University of Technology.
Joiner, K. F., Kiemele, M., & McAuliffe, M. (2016). Australia’s First Official Use of Design
of Experiments in T&E: User Trials to Select Rifle Enhancements, ITEA Journal, 37,
141-152.
Joiner, K. F., Malone, J., & Haimes, D. (2002). Assessment of Classroom Environments in
Reformed Calculus Education. Learning Environments Research, 5, 51-76.
https://doi.org/10.1023/A:1015635122875
Koppi, T., Sheard, J., Naghdy, F., Edwards, S. L., & Brookes, W. (2010). Towards a Gender
Inclusive Information and Communications Technology Curriculum: A Perspective
from Graduates in the Workforce. Computer Science Education, 20, 265-282.
https://doi.org/10.1080/08993408.2010.527686
Lednicky, E. J., & Silvestrini, R. T. (2013). Quantifying Gains Using the Capability-Based
Test and Evaluation Method. Quality Reliability Engineering International, 29,
139-156. https://doi.org/10.1002/qre.1292
Murphy, T., Leiby, L. D., Glaeser, K., & Freeman, L. (2015). How Scientific Test and
Analysis Techniques can Assist the Chief Developmental Tester. ITEA Journal, 36,
96-101.
O’Loughlin, K., Browning, C., & Kendig, H. (2016). Ageing in Australia: Challenges and
Opportunities. Springer Books.
Rasi, P., Hautakangas, M., & Väyrynen, S. (2015). Designing Culturally Inclusive Affor-
dance Networks into the Curriculum. Teaching in Higher Education, 20, 131-142.
https://doi.org/10.1080/13562517.2014.957268
Reagan, L. A., & Kiemele, M. J. (2008). Design for Six Sigma: The Tool Guide for Practi-
tioners. Colorado Springs: Air Academy Associates, LLC.
Robichaud, M., Dugas, M. J., & Conway, M. (2003). Gender Differences in Worry and
Associated Cognitive-Behavioural Variables. Journal of Anxiety Disorders, 17, 501-516.
Rucker, A. (2014). Improving Statistical Rigor in Defense Test and Evaluation: Use of
Tolerance Intervals in Designed Experiments. Defense AR Journal, 803-824.
Schmidt, S. R., & Launsby, R. G. (2005). Understanding Industrial Designed Experiments.
Colorado: Air Academy Associates, LLC.
Tait, K. (2009). Reflecting on How to Optimize Tertiary Student Learning through the
Use of Work Based Learning within Inclusive Education Courses. International Journal
of Teaching and Learning in Higher Education, 20, 192-197.
Udvari-Solner, A., & Thousand, J. S. (1996) Creating a Responsive Curriculum for Inclu-
sive Schools. Remedial and Special Education, 17, 182-192.
https://doi.org/10.1177/074193259601700307
Wistedt, I. (1998). Assessing Student Learning in Gender Inclusive Tertiary Mathematics
and Physics Education. Evaluation and Program Planning, 21, 143-153.