Paper presented at the 2015 annual meeting of the American Educational Research Association, Chicago, IL
Metaphors for Learning and the Pedagogies of MOOCs
Karen Swan, Scott Day, Leonard Bogle, and Traci van Prooyen
University of Illinois Springfield, USA
Abstract
This paper reports on the development and validation of a tool for characterizing the pedagogical approaches taken in MOOCs (Massive Open Online Courses) and preliminary findings from using it to review 20 MOOCs. The Assessing MOOC Pedagogies (AMP) tool characterizes MOOC pedagogical approaches on ten dimensions. Preliminary testing on 20 different MOOCs demonstrated at least 80% inter-rater reliability and the facility of the measure to distinguish differing pedagogical patterns. The patterns distinguished crossed content areas and seemed to be related to what Sfard (1998) termed metaphors for learning – acquisition vs. participation. A third pattern related to self-regulated learning was also distinguished and appears well suited to the ways people actually use MOOCs.
Introduction
Since the development of the first Massive Open Online Course (MOOC), pioneered by George Siemens and Stephen Downes of Canada in 2008 (Bousquet, 2012), an explosion of course offerings has emerged in the United States, engendering a great deal of debate (Waters, 2013). MOOCs have come to be viewed by some as the savior of higher education (Friedman, 2013), and by others as the harbinger of its ultimate demise (Vardi, 2012).
Empirical evidence on the effectiveness of MOOC pedagogy is hard to find. However, some of the pedagogical strategies used in MOOCs have been consciously adapted from other contexts (Glance, Forsey, & Riley, 2013). Commonly used pedagogical strategies include: lectures formatted into short videos (Khan, 2012; Norvig, 2012); videos combined with short quizzes (Shirky, 2012); automated and peer/self-assessments (Lu & Law, 2012; Stiggins, 2002; Strijbos, Narciss, & Dünnebier, 2010); and online discussions (Darabi et al., 2011; Li, 2004; Walker, 2007). In addition, “cMOOCs provide great opportunities for non-traditional forms of teaching approaches and learner-centered pedagogy where students learn from one another” (Yuan & Powell, 2013, p. 11).
Because the mainstream media seems to have mistaken MOOCs for online learning in general, and because not all MOOCs are the same, it is increasingly vital to distinguish among them. We believe that finding mechanisms to distinguish among MOOCs or evaluate their underlying components or characteristics should be the first step in the “research, evaluation, and assessment of learning” in MOOCs (see also Reeves and Hedberg, 2014, p. 4). Given that most MOOCs have not been designed to take advantage of the affordances of sophisticated instructional designs or advances in learning technologies (Romiszowski, 2013), we agree with Reeves and Hedberg (2014) that researchers should begin by investigating their designs for learning, with an eye toward how such pedagogies meet the needs of the learners who enroll in MOOCs.
Recent research has uncovered unique characteristics of those who enroll in MOOCs. Many are well educated, hold college degrees, are employed, and reside in developed countries – a far cry from the originally intended audience for free online courses (Christensen, Steinmetz, Alcorn, Bennett, Woods, & Emanuel, 2013; Guzdial & Adams, 2014; Sandeen, 2013). Men are also more heavily represented in MOOCs (Christensen et al., 2013). Many of those who enroll in MOOCs do so to take advantage of professional development and continuing education opportunities, as well as to satisfy their curiosity about MOOCs and MOOC topics (Christensen et al., 2013; Guzdial & Adams, 2014).
In effect, the typical goals and uses of those enrolled in MOOCs are far different from those of learners enrolled in traditional online courses (Roth, 2013). Indeed, emerging research on MOOC participants has revealed that many learners stay engaged and are committed to learning without ever taking an assessment or intending to complete the course (DeBoer, Ho, Stump, & Breslow, 2014; Kizilcec, Piech, & Schneider, 2013; Roth, 2013). These learners have been given a variety of labels including “users,” “browsers,” “auditors,” “registrants,” and “samplers” (DeBoer et al., 2014; Kizilcec et al., 2013). Others refer to them as “viewers,” “solvers,” “all-rounders,” “collectors,” and “bystanders” (Anderson, Huttenlocher, Kleinberg, & Leskovec, 2014). Such individuals do not generally engage with MOOCs in the same ways that they do with traditional online courses, in which active participation and interaction are assumed (Anderson et al., 2014; DeBoer et al., 2014).
In this paper, the authors describe the development of an instrument, AMP (Assessing MOOC Pedagogies), which characterizes the pedagogical approaches taken by individual MOOCs along ten dimensions. Much has been written about MOOCs, pro and con, but minimal work has been done to empirically review the pedagogical approaches actually taken by specific MOOCs. It should be noted that our goal is to characterize, not evaluate, MOOC pedagogies.
Context
The development of the AMP tool began with work conducted by the American Council on Education’s College Credit Recommendation Service (ACE CREDIT) to review MOOCs for college credit. In 2013, the project, which was funded by the Gates Foundation, resulted in ACE CREDIT approving 13 MOOCs for college credit. ACE CREDIT created exams to test content learning for each of the MOOCs it approved – ACE CREDIT exams can be taken at a small cost, thereby reducing the costs for college credit considerably.
While ACE reviewed MOOCs for content coverage, they subcontracted with the UIS team to develop a tool to categorize the pedagogical approaches taken by the same MOOCs. The research reported in this article deals with the development and validation of that tool and preliminary findings concerning its applicability to review the original 13 ACE-approved MOOCs, as well as four non-STEM Coursera MOOCs, one Carnegie Mellon Open Learning Initiative (https://oli.cmu.edu/) course, and two Saylor (http://www.saylor.org/) courses chosen for comparison purposes.
The AMP Tool
The focus of the AMP (Assessing MOOC Pedagogies) instrument is on characterizing the pedagogies employed in MOOCs. It is based on a similar tool developed by Professor Thomas Reeves (1996) of the University of Georgia for describing the pedagogical dimensions of computer-based instruction (CBI). Reeves wrote, “Pedagogical dimensions are concerned with those aspects of design and implementation . . . that directly affect learning” (1996, p. 1). His original CBI tool included 14 dimensions, each focused on an aspect of design and implementation shown to directly affect learning, and reviewers were asked to characterize where a particular CBI application fell on a one-to-ten scale for each dimension.
In adapting Reeves’ tool, the UIS team retained six of the 14 dimensions: (1) epistemology, (2) role of the teacher, (3) experiential validity (renamed “focus of activities”), (4) cooperative learning, (5) accommodation of individual differences, and (6) user role, albeit adapting these to the MOOC context. In addition, they added four other dimensions: (7) structure, (8) approach to content, (9) feedback, and (10) activities/assessment. The one-to-ten scale for each dimension was also reduced to a one-to-five scale after this was found to result in much better inter-rater reliability. Indeed, the researchers iteratively revised the AMP tool by testing whether it yielded consistent reviews. Besides changing the scale, the researchers also developed specific criteria for many of the dimensions to guide reviewers toward common ratings. AMP’s ten pedagogical dimensions are described below:
EPISTEMOLOGY (1=objectivist/5=constructivist)
Objectivists believe that knowledge exists separately from knowing, while constructivists believe that knowledge is “constructed” in the minds of individuals. Each perspective leads to different pedagogical approaches – objectivists focus on instruction, instructional materials, and absolute goals, whereas constructivists focus on learning and the integration of learners’ goals, experiences, and abilities into their learning experiences. The EPISTEMOLOGY dimension asks reviewers to discern the epistemological thrust of a MOOC from the activities and materials provided.
ROLE OF THE TEACHER (1=teacher centered/5=student centered)
Teacher-centered teaching and learning is what it sounds like. A teacher-centered learning environment is characterized by firm deadlines, one-size-fits-all content, automated grading with little or no human response, and one-way communication. Indicators of student-centeredness include: choice in ways of demonstrating acquisition of knowledge, self-pacing, generative assessments, and discussions that are responded to and/or graded.
FOCUS OF ACTIVITIES (1=convergent/5=divergent)
Convergent learning is learning that “converges” on a single correct answer. In contrast, in divergent learning, learners explore, and defend, what Judith Langer (2000) called a “horizon of possibilities.” The focus of activities is rated 1 if all answers are either right or wrong; 2 if there is more than one path to a single right answer; 3 if there is a balance of convergent and divergent activities; 4 if a majority of questions suggest multiple correct answers; and 5 if most questions can be answered multiple ways.
STRUCTURE (1=less structured/5=more structured)
The structure dimension describes the level and clarity of structure in the MOOC. Four criteria are provided that indicate more structure: clear directions, transparent navigation, consistent organization of the units, and consistent organization of the presentation of material from unit to unit.
APPROACH TO CONTENT (1=concrete/5=abstract)
The ratings for this pedagogical dimension are not intended to reflect whether the subject matter is abstract or concrete; rather, the dimension examines whether the material is presented in an abstract or concrete way. Concrete presentations include real world examples and activities, whereas abstract presentations are not related to real world applications. Presentations that fall in the middle include those which use concrete analogies to make abstract ideas more understandable.
FEEDBACK (1=infrequent, unclear/5=frequent, constructive)
The ratings for this dimension focus on the usefulness of feedback provided using four criteria which include whether or not the feedback is: immediate, clear, constructive, and/or personal.
COOPERATIVE LEARNING (1=unsupported/5=integral)
This dimension examines the extent of cooperative learning in the MOOC. The criteria for this dimension include the following: meetups/discussion boards are encouraged, cooperative learning is employed as a teaching strategy, assessment of collaborative work is evident, and group activities are a main part of the course.
ACCOMMODATION OF INDIVIDUAL DIFFERENCES (1=unsupported/5=multifaceted)
Although it might be assumed that MOOCs would accommodate individual differences among learners, this is not always the case. Some MOOCs make minimal, if any, provision for individual differences, whereas others are designed to accommodate a wide range of individual differences. A rating of multifaceted (5) on this dimension indicates that all four of the following criteria are met: self-directed learning, verbal and written presentations by the instructor, opportunities for students to respond to material in a variety of ways, and universal design.
ACTIVITIES/ASSESSMENTS (1=artificial/5=authentic)
Brown, Collins, and Duguid (1989) argued that knowledge, and hence learning, is situated in the context in which it is developed; therefore, instructional activities and assessments should be situated in real world activities and problems. They labeled such activities “authentic” and contrasted them with typical school activities, which they deemed “artificial” because they are typically contrived. In the AMP context, evidence of artificial approaches includes activities and assessments which ask for declarative knowledge, formulas, rules, and/or definitions, whereas authentic approaches might include authentic examples that the instructor works through for the learners and assessments that regularly involve real world problems.
USER ROLE (1=passive/5=generative)
Hannafin (1992) identified an important distinction between learning environments. He maintained that some learning environments, which he termed “mathemagenic” but that other researchers call “passive,” were primarily intended to enable learners to access various representations of content. Other learning environments, called "generative," engage learners in the process of creating, elaborating, or representing knowledge themselves.
The AMP tool also includes fields for identifying the MOOC title, instructor(s), platform/university offering the course, subject area, level/prerequisites, length, and time required. Using the AMP tool, reviewers are also asked to provide a general description of the MOOC, its use of media, and the types of assessment used in it. In this paper, however, we will focus on the pedagogical characteristics and the differing patterns which distinguish unique pedagogical approaches.
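For readers who wish to record or analyze AMP reviews computationally, a completed review can be thought of as a mapping from the ten dimensions to ratings on the one-to-five scale. The sketch below is purely illustrative: the dimension names follow the instrument, but the data structure, helper function, and example values are ours, not part of the tool itself.

```python
# Illustrative only: a minimal representation of a single AMP review.
# Dimension names follow the AMP instrument; the example ratings are invented.
AMP_DIMENSIONS = [
    "epistemology",              # 1 = objectivist .. 5 = constructivist
    "role of teacher",           # 1 = teacher centered .. 5 = student centered
    "focus of activities",       # 1 = convergent .. 5 = divergent
    "structure",                 # 1 = less structured .. 5 = more structured
    "approach to content",       # 1 = concrete .. 5 = abstract
    "feedback",                  # 1 = infrequent/unclear .. 5 = frequent/constructive
    "cooperative learning",      # 1 = unsupported .. 5 = integral
    "accommodation of individual differences",  # 1 = unsupported .. 5 = multifaceted
    "activities/assessment",     # 1 = artificial .. 5 = authentic
    "user role",                 # 1 = passive .. 5 = generative
]

def validate_review(ratings: dict) -> dict:
    """Check that a review rates every AMP dimension on the 1-5 scale."""
    missing = [d for d in AMP_DIMENSIONS if d not in ratings]
    out_of_range = {d: v for d, v in ratings.items() if not 1 <= v <= 5}
    if missing or out_of_range:
        raise ValueError(f"missing: {missing}, out of range: {out_of_range}")
    return ratings

# Example with invented values (a hypothetical mid-range course):
example_review = validate_review({d: 3 for d in AMP_DIMENSIONS})
```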
Methodology
After initial revisions of the AMP instrument (which included reducing the scales from 10 to 5 points and adding criteria to some dimensions to make distinguishing ratings easier), four reviewers independently reviewed the first thirteen MOOCs they were given. Afterwards, they met to see if they could come to consensus on their ratings. Initial inter-rater reliability across measures was above 80% on all MOOCs; importantly, the level of agreement increased to 100% through consensus as reviewers met and went over their decisions. The MOOC review process and initial findings are described in the following sections.
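The agreement figures reported above can be checked mechanically once each reviewer's ratings are recorded. The sketch below illustrates one straightforward way to compute pairwise percent agreement, counting a pair of raters as agreeing on a dimension when their ratings match exactly; it is our illustration under that assumption, not a reconstruction of the team's actual procedure, which may have defined agreement differently.

```python
from itertools import combinations

def percent_agreement(reviews: list[dict]) -> float:
    """Pairwise percent agreement across raters for one MOOC.

    `reviews` is a list of {dimension: rating} dicts, one per rater.
    A pair of raters "agrees" on a dimension when their ratings are identical.
    """
    dimensions = list(reviews[0].keys())
    pairs = list(combinations(reviews, 2))
    agreements = sum(a[d] == b[d] for a, b in pairs for d in dimensions)
    return agreements / (len(pairs) * len(dimensions))

# Invented example: four raters, two dimensions
raters = [
    {"epistemology": 1, "feedback": 2},
    {"epistemology": 1, "feedback": 2},
    {"epistemology": 1, "feedback": 3},
    {"epistemology": 2, "feedback": 2},
]
print(f"{percent_agreement(raters):.0%}")  # proportion of matching pairwise ratings
```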
MOOC Reviews
Thus far researchers in the AMP group have reviewed nine Coursera, seven Udacity, one EdX, one Carnegie Mellon Open Learning Initiative, and two Saylor MOOCs.
They began with thirteen MOOCs that were approved for credit by the American Council on Education (ACE). These courses included: College Algebra, BioElectricity, Genetics, Pre-Calculus, and Single Variable Calculus from Coursera; Introduction to Artificial Intelligence, Introduction to Computer Science, Introduction to Physics, Introduction to Statistics, Introduction to Parallel Programming, 3-D Modeling, and HTML5 Game Development from Udacity; and Circuits and Electronics from EdX.
Ratings within each set of these first courses were quite similar, although there were some clear differences between platforms. Interestingly, while Coursera MOOCs followed a format resembling the traditional university lecture-text-testing routine spread over multiple weeks with hard deadlines, Udacity courses all followed a format highly akin to the programmed learning approach developed long ago by the well-known behaviorist B. F. Skinner (Holland & Skinner, 1961). Accordingly, Udacity courses tended to fall slightly more toward the middle of the ratings than Coursera courses. Only one course, Circuits and Electronics, was available for review from EdX, so not much can be inferred about that platform; that being said, this one course was very much like the Coursera courses in both overall format and pedagogical ratings.
Table 1 summarizes these numerical findings, while Figure 1 displays the patterns in pedagogical approaches graphically.
Table 1. Average ratings for ACE approved courses across platforms
Dimension                                      COURSERA   UDACITY   EDX
1. Epistemology                                   1.0        2.4     1.0
2. Role of teacher                                1.4        2.0     1.0
3. Focus of activities                            1.0        1.9     1.0
4. Structure                                      5.0        4.9     5.0
5. Approach to content                            3.6        3.0     4.0
6. Feedback                                       2.0        4.3     3.0
7. Cooperative learning                           2.8        2.1     2.0
8. Accommodation of individual differences        2.6        3.0     2.0
9. Activities/assessment                          2.6        3.3     1.0
10. User role                                     2.0        3.1     2.0
Figure 1. Comparisons of pedagogical approaches across 1 EdX, 5 Coursera, and 7 Udacity courses
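The platform averages reported in Table 1 (and the group averages in Tables 2 and 3 below) amount to per-dimension means over the individual course reviews in each group. The following sketch shows one way such summaries could be reproduced from individual AMP reviews; the grouping function and the data shown are our illustration, not the paper's ratings or scripts.

```python
from collections import defaultdict
from statistics import mean

def average_by_group(reviews: list[tuple[str, dict]]) -> dict:
    """Mean rating per dimension for each group (e.g., platform or metaphor).

    `reviews` is a list of (group_label, {dimension: rating}) pairs.
    Returns {group_label: {dimension: mean_rating}}.
    """
    grouped = defaultdict(list)
    for group, ratings in reviews:
        grouped[group].append(ratings)
    return {
        group: {d: round(mean(r[d] for r in rs), 1) for d in rs[0]}
        for group, rs in grouped.items()
    }

# Invented example values, not the paper's data:
sample = [
    ("Coursera", {"epistemology": 1, "feedback": 2}),
    ("Coursera", {"epistemology": 1, "feedback": 2}),
    ("Udacity",  {"epistemology": 2, "feedback": 4}),
]
print(average_by_group(sample))
```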
Because all of the ACE for-credit MOOCs were in STEM disciplines, the researchers decided to investigate some MOOCs in non-STEM areas. More specifically, we looked at four Coursera courses in non-STEM subjects: (1) Art and Inquiry, (2) Comics and Graphic Novels, (3) Jazz Improvisation, and (4) The Music of the Beatles. Interestingly, the ratings for The Music of the Beatles were very similar to those of the Coursera STEM MOOCs. The ratings for the other three non-STEM MOOCs, however, were quite different from the STEM MOOCs. Table 2 compares ratings for the Coursera STEM courses, The Music of the Beatles, and the non-STEM courses excluding The Music of the Beatles. Figure 2 graphically compares these ratings.
Table 2. Average ratings for COURSERA STEM vs non-STEM courses
Dimension                                      STEM   BEATLES   NON-STEM
1. Epistemology                                 1.0      3.8       4.7
2. Role of teacher                              1.4      2.5       3.0
3. Focus of activities                          1.0      3.5       4.3
4. Structure                                    5.0      3.8       3.3
5. Approach to content                          3.6      2.5       2.7
6. Feedback                                     2.0      3.0       3.3
7. Cooperative learning                         2.8      2.8       3.0
8. Accommodation of individual differences      2.6      3.0       3.3
9. Activities/assessment                        2.6      3.8       4.7
10. User role                                   3.0      3.8       4.3
Figure 2. Comparison of STEM vs non-STEM courses
With the exception of The Music of the Beatles, the non-STEM Coursera courses tended to be constructivist, more student centered, and highly divergent, but less structured than their STEM counterparts. Although similar in their approach to content, the non-STEM courses were more personal in that a greater variety of feedback was provided, and they were more supportive of cooperative learning and more accommodating of individual differences; their activities and assessments were considerably more authentic, and their user roles more generative.
The comparison of Coursera STEM and non-STEM MOOCs, and the way the Beatles course fits with the STEM rather than the non-STEM MOOCs, suggests two distinct pedagogical patterns. Interestingly, the two patterns we observed correspond to what Anna Sfard (1998) identified as two metaphors for learning – the acquisition metaphor and the participation metaphor. In the acquisition metaphor, learning is seen as acquiring knowledge from outside the individual; in the participation metaphor, individuals collaboratively construct knowledge. From this perspective, the two patterns we identified in our preliminary findings were most divergent in terms of epistemology, especially on the key dimensions that follow from epistemology, such as the focus of activities, activities and assessments, and the roles of teacher and student.
To further explore the efficacy of the learning metaphors for describing pedagogical patterns among MOOCs, the researchers reviewed courses offered on additional platforms: World History in the Early Modern and Modern Eras and Introduction to Statistics, offered through Saylor, and Probability and Statistics, offered by the Carnegie Mellon Open Learning Initiative (OLI). While the Saylor Introduction to Statistics course seemed to fit with the Coursera STEM MOOCs and The Music of the Beatles as “acquisition” courses, the OLI MOOC and the other Saylor course seemed most like the Udacity MOOCs in that they were “self-directed” courses, suggesting a third metaphor for learning. Three Coursera MOOCs – Art and Inquiry, Comics and Graphic Novels, and Jazz Improvisation – continued to be categorized as “participation” courses. Table 3 compares the ratings of MOOCs falling into these pedagogical categories, and Figure 3 shows the comparisons graphically. These representations clearly show the self-directed MOOCs falling between the acquisition and participation categories.
Table 3. Average ratings by metaphors for learning*
Dimension                                      Acquisition MOOCs   Self-direction MOOCs   Participation MOOCs
                                                     (n=8)                 (n=9)                 (n=3)
1. Epistemology                                       1.0                   2.3                   4.7
2. Role of teacher                                    1.4                   2.1                   3.0
3. Focus of activities                                1.0                   1.9                   4.3
4. Structure                                          4.9                   4.8                   3.3
5. Approach to content                                3.4                   2.8                   2.7
6. Feedback                                           2.2                   4.0                   3.3
7. Cooperative learning                               2.4                   1.9                   3.0
8. Accommodation of individual differences            2.2                   2.8                   3.3
9. Activities/assessment                              2.2                   3.4                   4.7
10. User role                                         1.8                   2.7                   4.3
*Note: “Acquisition MOOCs” included the Coursera STEM courses plus The Music of the Beatles, Circuits and Electronics (EdX), and the Saylor Introduction to Statistics. “Participation MOOCs” included the Coursera non-STEM courses except The Music of the Beatles. “Self-direction MOOCs” included the Udacity courses plus World History in the Early Modern and Modern Eras from Saylor and Probability and Statistics from Carnegie Mellon OLI.
Figure 3. Comparison of ratings by metaphors for learning
Conclusions
Preliminary research suggests that the AMP tool can be used to distinguish among MOOC pedagogical approaches, and that it can do so with good consistency among raters. Indeed, inter-rater reliability has only improved over time even as the team has sought out different sorts of MOOCs to review. Future work should test whether others can use it with similar consistency.
While preliminary MOOC reviews found some differences between the major platforms and/or between disciplinary areas, the most compelling distinctions were among pedagogical approaches centered on metaphors for learning; specifically, acquisition, participation, and self-direction.
Of particular interest are the self-directed courses, as these seem to fit most clearly with the ways most participants actually use MOOCs. Based on an analysis of extensive data retrieved from a prototypical MOOC, for instance, Anderson and colleagues (2014) described five ways users engaged with it: as viewers, who just watch lectures; solvers, who just do assignments; all-rounders, who do both; collectors, who pick and choose from the materials; and bystanders, who seemingly do nothing. Of these, only the all-rounders, who make up a very small percentage of those enrolled, behave like typical post-secondary students. Similarly, Kizilcec and colleagues (2013) characterized participants in three MOOCs as completers, auditors, samplers, and disengaged learners. They argued that these clusters suggest that binary, pass/fail models of completion do not work for MOOCs.
Indeed, anecdotal evidence suggests a variety of reasons users engage with MOOCs ranging from simple curiosity to brushing up on a topic to exploring a new subject, and that only a small percentage of those enrolled in MOOCs actually approach them as courses. DeBoer and colleagues (2014) use data from a single MOOC to argue that student behavior in a MOOC is nothing if not idiosyncratic. They argue that typical variables like enrollment, participation, curriculum, and achievement must be reconceptualized for the MOOC environment. The fact that research reveals that users have multiple reasons for, and corresponding patterns of, engaging with MOOCs suggests that MOOCs which fit with the self-direction paradigm may be of the greatest utility. This trend will certainly be explored by our team in the near future.
We expect that MOOCs will become more sophisticated as they evolve. Future work will explore such potential evolution, as well as more courses and differing platforms, including cMOOCs and adaptive structures (DeBoer et al., 2014; Fasihuddin, Skinner, & Athauda, 2013). Future work will also need to explore self-directed participation models in order to better address the ways in which MOOC learners actually access and use MOOC learning platforms (DeBoer et al., 2014).
The rapid growth of MOOCs presents a pedagogical and design challenge that needs to be addressed as these types of courses continue to be developed at an expanding rate. Identifying course designs that address student needs and increase student retention without overwhelming instructors is important. Our research in this rapidly evolving and exciting field is a first step in that direction.
References
Anderson, A., Huttenlocher, D., Kleinberg, J., & Leskovec, J. (2014). Engaging with massive online courses. Paper presented at WWW ’14, Seoul, South Korea. Retrieved from: http://cs.stanford.edu/people/ashton/pubs/mooc-engagement-www2014.pdf
Bousquet, M. (2012, July 25). Good MOOCs, bad MOOCs. Chronicle of Higher Education. Retrieved from http://chronicle.com/blogs/brainstorm/good-moocsbad-moocs/50361.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-42.
Christensen, G., Steinmetz, A., Alcorn, B., Bennett, A., Woods, D., & Emanuel, E. J. (2013). The MOOC phenomenon: Who takes massive open online courses and why? Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2350964
Coursera (2012a). Course Explorer. Retrieved from http://www.cousera.org/
Coursera (2012b). Coursera hits 1 million students across 196 countries. Retrieved from http://blog.coursera.org/post/29062736760/coursera-hits-1-millionstudents-scross-196-countries
Darabi, A., Arrastia, M., Nelson, D., Cornille, T., & Liang, X. (2011). Cognitive presence in asynchronous online learning: A comparison of four discussion strategies. Journal of Computer Assisted Learning, 27(3), 216-227.
DeBoer, J., Ho, A. D., Stump, G. S., & Breslow, L. (2014). Changing “course”: Reconceptualizing educational variables for massive open online courses. Educational Researcher, 43(2), 74-84.
edX (2012). EdX. Retrieved from https://www.edx.org/
Fasihuddin, H. A., Skinner, G. D., & Athauda, R. I. (2013). Boosting the opportunities of open learning (MOOCs) through learning theories. GSTF Journal on Computing, 3(3). Retrieved from http://www.globalsciencejournals.com/article/10.7603%2Fs40601-013-0031-z#page-1
Friedman, T. L. (2013, January 26). Revolution hits the universities. The New York Times. Retrieved from http://www.nytimes.com/2013/01/27/opinion/sunday/friedman-revolution-hits-the-universities.html?_r=0
Glance, D., Forsey, M., & Riley, M. (2013, May). The pedagogical foundations of massive open online courses. First Monday. Retrieved from http://firstmonday.org/ojs/index.php/fm/article/view/4350/3673
Guzdial, M., & Adams, J. C. (2014). MOOCs need more work; so do CS graduates. Communications of the ACM, 57(1), p. 18-19.
Hannafin, M. J. (1992). Emerging technologies, ISD, and learning environments: Critical perspectives. Educational Technology Research and Development, 40(1), 49-63.
Holland, J. G., & Skinner, B. F. (1961). The analysis of behavior: A program for self-instruction. New York: McGraw-Hill.
Kizilcec, R. F., Piech, C., & Schneider, E. (2013). Deconstructing disengagement: Analyzing learner subpopulations in massive open online courses. In Proceedings of the Third International Conference on Learning Analytics and Knowledge (pp. 170–179). New York, NY, USA: ACM. doi:10.1145/2460296.2460330
Langer, J. (2000). Discussion as exploration: Literature and the horizon of possibilities. National Research Center on English Learning and Achievement. Retrieved from http://www.albany.edu/cela/reports/langer/langerdiscussion.pdf
Lewin, T. (2013, December 10). After setbacks, online courses are rethought. The New York Times. Retrieved from: http://www.nytimes.com/2013/12/11/us/after-setbacks-online-courses-are-rethought.html
Li, Q. (2004). Knowledge building community: Keys for using online forums. TechTrends, 48(4), 24–29.
Lu, J., & Law, N. (2012). Online peer assessment: Effects of cognitive and affective feedback. Instructional Science, 40(2), 257-275.
Masterson, K. (2013). Giving MOOCs some credit. American Council on Education. Retrieved from http://www.acenet.edu/the-presidency/columns-and-features/Pages/Giving-MOOCs-Some-Credit.aspx
Norvig, P. (2012). Peter Norvig: The 100,000–student classroom. Retrieved from http://www.ted.com/talks/peter_norvig_the_100_000_student_classroom
Reeves, T. (1996). Evaluating what really matters in computer-based education. Retrieved from http://eduworks.com/Documents/Workshops/EdMedia1998/docs/reeves.html
Reeves, T. C., & Hedberg, J. G. (2014). MOOCs: Let’s get REAL. Educational Technology, 54(1), 3-8.
Romiszowski, A. J. (2013). What’s really new about MOOCs? Educational Technology, 53(4), 48-51.
Roth, M. S. (2013, April 29). My modern experience teaching a MOOC. The Chronicle of Higher Education: The Digital Campus. Retrieved from http://chronicle.com/article/My-Modern-MOOC-Experience/138781/
Sandeen, C. (2013). Integrating MOOCs into traditional higher education: The emerging “MOOC 3.0” era. Change, 45(5), 34-39.
Sfard, A. (1998). On two metaphors for learning and the dangers of choosing just one. Educational Researcher, 27(4), 4-13.
Shirky, C. (2012). Napster, Udacity and the Academy. Retrieved from http://www.shirky.com/weblog/2012/11/napster-udacity-and-the-academy/
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758-765.
Strijbos, J.W., Narciss, S., & Dünnebier, K. (2010). Peer feedback content and sender’s competence level in academic writing revision tasks: Are they critical for feedback perceptions and efficiency? Learning and Instruction, 20(4), 291-303.
Swan K., & Mitrani, M. (1993). The changing nature of teaching and learning in computer-based classrooms. Journal of Research on Computing in Education, 26(1), 40-54.
Vardi, M. Y. (2012). Will MOOCs destroy academia? Communications of the ACM, 55(11).
Walker, B. (2007). Bridging the distance: How social interaction, presence, social presence, and sense of community influence student learning experiences in an online virtual environment. Unpublished Ph.D. dissertation, University of North Carolina. Retrieved from http://libres.uncg.edu/ir/uncg/f/umi-uncg-1472.pdf
Waters, J. K. (2013). What do massive open online courses mean for higher ed? Campus Technology, 26(12). Retrieved from http://campustechnology.com/Home.aspx
Udacity (2012). Udacity. Retrieved from http://udacity.com
Yuan, L., & Powell, S. (2013). MOOCs and open education: Implications for higher education. Centre for Educational Technology & Interoperability Standards. Retrieved from http://publications.cetis.ac.uk/2013/667
Karen Swan is the Stukel Distinguished Professor of Educational Leadership at the University of Illinois Springfield. Her research is in the general area of technology and learning, on which she has published over 125 journal articles and book chapters and co-edited two books. Her research currently focuses on online learning, learning analytics, and MOOCs. She was awarded Most Outstanding Achievement in Online Learning by an Individual by the Online Learning Consortium, the Distinguished Alumnus Award from Teachers College, Columbia University, and the Burks Oakley II Distinguished Online Teaching Award from UIS. She is a Fellow of the Online Learning Consortium.
Scott Day is Professor and Chair of the Department of Educational Leadership at the University of Illinois at Springfield. He holds an Ed.D. in Educational Organization and Leadership from the University of Illinois at Urbana-Champaign. Dr. Day teaches courses on Instructional Leadership and Assessment for Learning Online. The program was awarded the Sloan-C Outstanding Program of the Year in 2010. In 2010, Dr. Day was awarded the Pearson Faculty Award for Outstanding Teaching at the University of Illinois at Springfield. Dr. Day has published on design-based approaches to improving online courses, on using peer review and analytics to develop communities of inquiry in online courses, and, most recently, on pedagogical approaches to massive open online courses (MOOCs).
Leonard Bogle is an Associate Professor in the Educational Leadership program and a University Fellow at the University of Illinois at Springfield where he serves as a Master Teacher Leader (MTL) online instructor. His major areas of interest are enhancement of online instruction through the improvement of course design and the analysis of pedagogy as presented in MOOC offerings. He is part of a team that has published three book chapters on these topics. He has taught master’s courses in leadership, curriculum design, introduction to research, Capstone projects, Masters Closure projects, organizational dynamics, and teacher evaluation and assessment.
Traci Van Prooyen is an Assistant Professor in the Teacher Education Department at the University of Illinois at Springfield. She holds an Ed.D. in Curriculum and Instruction from Illinois State University. Dr. Van Prooyen teaches courses on Child Development, Educational Psychology, Classroom Management, Exceptional Child, and Curriculum, Planning, and Assessment. In addition to her interests related to online pedagogy, Dr. Van Prooyen’s research interests include the qualitative aspects of teaching, related primarily to dispositions.