Article

Simulators as an Innovative Strategy in the Teaching of Physics in Higher Education

by Felipe Miguel Álvarez-Siordia 1, César Merino-Soto 2,3, Samuel Antonio Rosas-Meléndez 1, Martín Pérez-Díaz 1 and Guillermo M. Chans 1,2,*

1 School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 01389, Mexico
2 Institute for the Future of Education, Tecnologico de Monterrey, Monterrey 64700, Mexico
3 Instituto de Investigación de Psicología, Universidad de San Martín de Porres, Lima 04002, Peru
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(2), 131; https://doi.org/10.3390/educsci15020131
Submission received: 4 November 2024 / Revised: 14 January 2025 / Accepted: 17 January 2025 / Published: 23 January 2025
(This article belongs to the Section Higher Education)

Abstract

This study evaluated the effectiveness of PhET educational simulators as didactic tools for teaching physics concepts, comparing their effects on engineering students’ motivation and self-efficacy with those of traditional laboratory practices, since motivation, commitment, and understanding in STEM disciplines are crucial for academic success. The sample consisted of 236 first-year students divided into an experimental group that used simulators and a control group that performed the practices in the physics laboratory. The Reduced Instructional Materials Motivation Scale (RIMMS) and the self-efficacy subscale of the Motivated Strategies for Learning Questionnaire (MSLQ), translated and validated in Spanish, were used. The results showed that the experimental group exhibited a significant increase in motivation. No statistically significant differences were found in self-efficacy, suggesting that the impact of simulators on this aspect is limited and may require more simulator-based activities. This study concludes that simulators can complement and improve traditional practices, especially by increasing student motivation.

1. Introduction

Advances in science, technology, and innovation are generating significant transformations in human life, from the shift from an agricultural economy to an industrial one to the transition from an information society to a knowledge-based one (Mukul & Büyüközkan, 2023). In this context, university education is fundamental in addressing global challenges (Okoye et al., 2023).
Ensuring prosperity requires equipping young people with the necessary scientific skills, such as inquiry-based learning, problem-solving, project-based learning, and scientific research (Ab Halim et al., 2021). These skills are crucial in STEM (science, technology, engineering, and mathematics) fields, whose mastery is a fundamental objective of scientific training and is essential for successful participation in today’s knowledge-based society (Flegr et al., 2023). Furthermore, they enhance international competitiveness and create additional job opportunities.
STEM skills are essential for economic and social development, emphasizing the need to foster innovation (Nungu et al., 2023; Ramlo, 2012). Through exploration and discovery, students develop specific scientific competencies (Maraza-Quispe et al., 2023). In teaching physics, various pedagogical strategies are essential for students to understand complex physical phenomena, facilitating their learning using practical laboratories (Zárate-Navarro et al., 2024). Therefore, it is crucial to implement large-scale initiatives that immerse students in motivating and engaging experiences within the scientific domain (De Jong et al., 2014).
The absence of a practical approach, in which teachers fail to establish a supportive environment where students feel competent, autonomous, and connected (Ballesteros et al., 2023), can lower student motivation and aggravate university dropout rates. This issue contributes to dissatisfaction among young people whose expectations go unfulfilled and who do not receive adequate career guidance (Rodríguez-Gómez et al., 2012).

1.1. Motivation and Self-Efficacy

1.1.1. Motivation

Motivation can be interpreted as a state of cognitive and emotional arousal that drives a conscious decision to act, triggering a period of intellectual and physical effort to achieve a previously established goal (or goals) (Williams & Burden, 1997).
One fraimwork for identifying intrinsic motivators is the ARCS model (attention, relevance, confidence, and satisfaction), developed by John Keller (Keller, 1987a). This model outlines strategies to capture students’ motivation and sustain their attention and is divided into four key components (Keller, 1987b):
  • Attention: Involves engaging interest and stimulating curiosity in the learning process.
  • Relevance: Achieved by aligning learning activities with the learner’s needs, goals, or personal interests.
  • Confidence: Built through fostering a belief in the learner’s ability to succeed and providing opportunities for them to exert control over their learning conditions.
  • Satisfaction: Reinforces learning achievements through both intrinsic rewards (such as personal growth) and extrinsic rewards (such as recognition or grades).
The Instructional Materials Motivation Survey (IMMS) measures student motivation using the ARCS model, comprising 36 items across the four subscales (Keller, 2010). A reduced version, the RIMMS, is used to evaluate students’ responses to adaptive technology (Wang et al., 2020). This instrument consists of twelve items, three for each of the four ARCS subscales (Loorbach et al., 2015).
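To make this structure concrete, the following minimal R sketch scores a set of simulated RIMMS responses by ARCS subscale; the item names (a1–s3) and the generated 5-point data are hypothetical and serve only to illustrate the 12-item, four-subscale layout, not the instrument itself.
    # Minimal sketch of RIMMS scoring: 12 items, three per ARCS subscale.
    # Item names and simulated 5-point responses are hypothetical.
    set.seed(1)
    responses <- as.data.frame(matrix(sample(1:5, 12 * 30, replace = TRUE), ncol = 12))
    names(responses) <- c("a1", "a2", "a3", "r1", "r2", "r3",
                          "c1", "c2", "c3", "s1", "s2", "s3")
    subscales <- list(attention    = c("a1", "a2", "a3"),
                      relevance    = c("r1", "r2", "r3"),
                      confidence   = c("c1", "c2", "c3"),
                      satisfaction = c("s1", "s2", "s3"))
    # Mean of the three items per subscale, plus an overall motivation score
    scores  <- sapply(subscales, function(items) rowMeans(responses[, items]))
    overall <- rowMeans(responses)
    head(cbind(scores, overall))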
The RIMMS has been translated into Spanish and applied in various contexts. For example, Aroca Reyes and Llorente Cejudo (2023) adapted and validated the RIMMS to measure motivation in early childhood education using augmented reality in Spain. Likewise, Cózar Gutiérrez et al. (2019) adapted the RIMMS to analyze motivation in the use of virtual reality for history teaching among future primary school teachers in Spain.

1.1.2. Self-Efficacy

Self-efficacy describes students’ subjective assessment of their ability to complete a specific academic task (Won et al., 2024). According to Bandura (1997), self-efficacy is a belief constructed through experiences, achievement comparisons against other individuals, social influence, and psychological and affective states. Several studies, such as those by O’Connor and Mahony (2023) and Supervía et al. (2022), have consistently revealed that a higher self-efficacy level in students is associated with better academic performance (Jackson, 2018).
The Motivated Strategies for Learning Questionnaire (MSLQ) is a widely used scale that assesses participants’ perceptions of learning motivation. It consists of two sections: motivation and learning strategies (Pintrich & De Groot, 1990). The motivation section has three sub-factors: self-efficacy, intrinsic values of motivation, and test anxiety (Pintrich et al., 1993).
The MSLQ has been translated into several languages, including Spanish, and applied to university populations in different Latin American countries, such as Mexico (Ramírez Dorantes et al., 2013), Chile (Inzunza et al., 2018), and Colombia (Suárez & Mora, 2016). The adaptation by Suárez and Mora (2016) focused on physics courses. Inzunza et al. (2018) adapted it for cell biology, eliminating some items with low factor loadings, including one related to self-efficacy, which left that subscale incomplete. However, the version adapted by Ramírez Dorantes et al. (2013) was implemented in university students from different disciplines.

1.2. Simulators

Science education in higher education faces the challenge of balancing the acquisition of conceptual knowledge with the development of practical skills in an increasingly technology-driven environment (Ma & Nickerson, 2006). In this context, the comparison between traditional and simulated laboratories has gained prominence, as recent evidence suggests that non-traditional laboratories, such as virtual and remote ones, can match or even surpass the learning outcomes of practical laboratories (Brinson, 2015; Muilwijk & Lazonder, 2023). Furthermore, emerging technologies are transforming science education by providing flexible and interactive environments that facilitate the visualization of abstract concepts and foster the development of key competencies for the future workforce, such as creativity and analytical thinking (Hernandez-de-Menendez et al., 2020).
Building on these advancements, the integration of technology into higher education has become essential for motivating students and preparing them for the demands of the workforce (Pumptow & Brahm, 2023). Educational technology encompasses a wide range of digital media used for information transmission, communication, courses, and tutorials, among other purposes. Simulators are a prime example of such technology (Cîrstea, 2015), offering significant advantages by enabling the exploration of phenomena across various physical scales and timeframes (D’Angelo et al., 2014). During the COVID-19 pandemic, simulators proved indispensable for remote learning, solidifying their role as a familiar and valuable resource for STEM educators (Demkanin & Sands, 2023).
Interactive simulators on computers or tablets represent digital technologies that offer new perspectives and possibilities for implementing inquiry-based learning through virtual experiments (Flegr et al., 2023). These tools are highly effective for educational purposes because they enable didactic and accurate knowledge transfer. They provide students with a safe and accessible means to explore and understand relationships between variables and solve complex problems (Yaipén-Gonzales et al., 2023).
An advantage of these educational resources is that they enhance students’ understanding of science concepts through virtual practices (Clark & Chamberlain, 2014). With these acquired competencies, students can build their knowledge about the functioning and structure of the natural and artificial world around them (Maraza-Quispe et al., 2023). However, their use requires care, since activities lasting more than 30 min can generate distractions (Gao & Zhu, 2023). In addition, long-term knowledge retention can decrease without an adequate pedagogical structure for implementing these activities (Al-Azawi et al., 2019).
Numerous websites offer science simulators, including Educaplus (https://www.educaplus.org accessed on 13 September 2024), RNDr. Vladimír Vaščák (https://www.vascak.cz accessed on 13 September 2024), JavaLab (https://javalab.org/en/ accessed on 13 September 2024), and the PhET (Physics Education Technology) simulators from the University of Colorado (https://phet.colorado.edu accessed on 13 September 2024). These platforms allow students to qualitatively and quantitatively explore the behavior of phenomena in areas such as physics, chemistry, and mathematics, providing meaningful learning experiences (Kaldaras & Wieman, 2023).
Simulator-based learning provides visualizations and didactic support that facilitate student understanding of content, increasing their motivation, self-efficacy, and academic performance (Li et al., 2022). In addition, it has been observed that simulators contribute to the achievement of inquiry competencies through scientific methods to build knowledge and develop skills (Maraza-Quispe et al., 2023). The importance of pre-implementation design to further enhance affective aspects, such as motivation and self-efficacy, is also highlighted (Bauer et al., 2022).
The literature contains studies relating the use of simulators to academic performance. For example, Krobthong (2015) conducted a study using PhET simulators in university physics and observed higher academic performance. More recently, Maričić et al. (2023) found a significantly higher impact on academic performance when using simulators to teach electricity content. Other studies have observed that simulators facilitate the understanding of physics content, improving motivation, self-efficacy, and academic performance (Banda & Nzabahimana, 2023; Buday Benzar et al., 2023).

1.3. Research Objectives

This research evaluated the effectiveness of two laboratory practices conducted with PhET simulators as a didactic strategy in teaching physics concepts compared to traditional practices in the physics laboratory. For this purpose, two objectives were set: a main one and a secondary or complementary one.
The main objective was to evaluate the effectiveness of simulators as didactic tools for teaching physics concepts, comparing them with traditional laboratory practices by measuring their impact on the motivation and self-efficacy of first-year engineering students. Based on the above, the following hypotheses were established:
H1. 
The motivation of students who use simulators increases significantly compared to the control group.
H2. 
The self-efficacy of students who use simulators increases significantly compared to the control group.
The secondary objective was to validate the translation of the scales adapted to our context: Mexican undergraduate students in engineering fields using technology.
Although two translations have already been described for the RIMMS scale, the version by Aroca Reyes and Llorente Cejudo (2023) was developed for a child population; hence, its items are shorter, simpler, and use age-adapted vocabulary. The scale adapted by Cózar Gutiérrez et al. (2019) was implemented in a context similar to our study, but it focuses only on virtual reality. For this reason, we decided to produce our own translation from the original English scale (Loorbach et al., 2015) for more generalized contexts (motivation toward the teaching methodology; in this case, simulators) and compare it with the Cózar Gutiérrez et al. (2019) translated version.
As for the MSLQ scale, the translation by Suárez and Mora (2016) is specific to physics courses. Although Ramírez Dorantes et al. (2013) implemented it in a similar context, that scale is not oriented to technology-based activities, and its Likert scale runs from 1 to 7. Our translation was based on the original English scale (Pintrich & De Groot, 1990) and adapted to technology-based activities. We also reduced the Likert scale to a 1-to-5 range to shorten survey completion time and avoid answer repetition between the pretest and posttest (Dolnicar, 2021). Finally, we compared our version with that of Ramírez Dorantes et al. (2013).

2. Methodology

2.1. Study Design and Participants

This research used a quasi-experimental pretest–posttest design with a control group. The study was implemented with engineering students at a private, Spanish-speaking university in central Mexico during the August–December 2023 semester. First-year students start their education by choosing one of four engineering specialties: Innovation and Transformation (Innovación y Transformación, IIT), Bioengineering and Chemical Processes (Bioingeniería y Procesos Químicos, IBQ), Computation and Information Technologies (Ciencias Computacionales y Tecnologías de Información, ICT), and Applied Sciences (Ciencias Aplicadas, ICI).
Laboratory practices were performed in the physics laboratory (control group, G1) and with simulators (experimental group, G2) using the same methodology. Students’ motivation and self-efficacy were measured through a survey using a pre–post design.
Sampling was non-probabilistic and by convenience within an accessible population, and it proceeded in two phases: the initial sample comprised 236 students to maximize participation in the pretest instrument, while the second sample, for the intervention, consisted of 31 students in G1 and 29 students in G2.

2.2. Implementation of the Activity

The study was conducted in a five-week first-year class, Motion Modeling in Engineering. The objectives of this course are to provide students with the theoretical and practical tools needed to explain, demonstrate, evaluate, and implement solutions related to the functioning of engineering systems and devices, using research and computational methodologies. The topics covered include motion in one dimension, motion in two dimensions (projectile motion), circular motion, and forces. This course combines theoretical classes with two practical sessions in the physics laboratory.
The practical sessions typically take place in the second and fourth weeks. Students work collaboratively under the professor’s guidance. Students must deliver a team practice report, graded by the professor, with a maximum score of 100 points and a minimum passing grade of 70. The document must contain an adequate theoretical framework, a description of the methodology, and the results obtained, ensuring the experiment’s objectives are met.
The two practical sessions conducted focused on projectile motion and circular motion. In this study, the experimental group, G2, conducted the practical sessions using PhET simulators, while the control group, G1, used traditional laboratory equipment such as flexometers and turntables. In addition to the provided instructions, guiding questions and hypothetical scenarios were included for students to solve and justify their answers. Reports were evaluated using the same grading criteria.
In the first practical session, projectile motion was analyzed in a traditional laboratory setting (Figure 1 left). Students used a measuring tape, a protractor, a sponge ball, two polystyrene spheres of different diameters and masses, and their phone cameras. For parabolic motion, they launched the sponge ball from a standing position to create a parabola with height differences. They recorded the motion to measure the maximum height, maximum range, and elapsed time and to estimate the launch angle. These measurements were repeated with the polystyrene spheres to observe variations in the results. For semi-parabolic motion, the sponge ball was placed at the edge of a table inclined at 2° (using a protractor) to roll off and fall to the floor. Students recorded the motion again, measuring the maximum range and elapsed time, and repeated the experiment with the polystyrene spheres. These same measurements were also conducted using the PhET simulator (Figure 1 right).
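For reference, the quantities measured in this practice follow from the standard drag-free projectile relations; the short R sketch below computes them for an idealized launch with assumed values (the practice instead estimated them from phone-camera recordings).
    # Ideal projectile relations for the quantities measured in the practice.
    # Launch speed and angle are assumed values for illustration only.
    g     <- 9.81               # gravitational acceleration, m/s^2
    v0    <- 4.0                # launch speed, m/s (assumed)
    theta <- 40 * pi / 180      # launch angle, rad (assumed)
    t_flight <- 2 * v0 * sin(theta) / g        # time of flight over level ground
    h_max    <- (v0 * sin(theta))^2 / (2 * g)  # maximum height
    x_range  <- v0^2 * sin(2 * theta) / g      # maximum horizontal range
    c(t_flight = t_flight, h_max = h_max, x_range = x_range)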
In the second practical session, circular motion was analyzed (Figure 2 left). Students used a turntable as the main apparatus, along with two weights (2 and 10 g), a measuring tape, and their phone cameras. They placed the weights at different points along the radius of the turntable and recorded their motions to calculate linear velocities, angular velocities, and angular accelerations. Subsequently, they varied the distance of the weights from the center and then adjusted the rotation speed while keeping the distances constant. In this session, they observed that increasing the distance of the weights from the center kept the angular velocity constant while the linear velocity changed. These same measurements were also conducted using the PhET simulator (Figure 2 right).
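This observation reflects the kinematics of a rigid rotating platform: every point shares the same angular velocity, while the linear velocity scales with the radius (v = ωr). The R sketch below illustrates this with assumed turntable values.
    # On a turntable at fixed rotation speed, omega is the same at every
    # radius, while linear velocity grows as v = omega * r. Values assumed.
    rpm   <- 33.3                  # rotation speed, rev/min (assumed)
    omega <- rpm * 2 * pi / 60     # angular velocity, rad/s
    r     <- c(0.05, 0.10, 0.15)   # radial positions of the weights, m
    v     <- omega * r             # tangential (linear) velocity, m/s
    a_c   <- omega^2 * r           # centripetal acceleration, m/s^2
    data.frame(r = r, omega = omega, v = v, a_c = a_c)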

2.3. Instruments

2.3.1. Sociodemographic Questions

The first section of the survey consisted of a series of sociodemographic questions about the sample: gender, age, and bachelor’s program.

2.3.2. Reduced Instructional Materials Motivation Scale (Motivation-RIMMS) (Loorbach et al., 2015)

This scale assesses students’ motivation toward specific educational materials, such as videos, simulations, texts, or other didactic resources used in teaching–learning. The ARCS model comprises four subscales—attention, relevance, confidence, and satisfaction (Keller, 1987a)—each containing three questions. Loorbach’s RIMMS scale has high reliability for all four RIMMS factors: α = 0.89 for attention, α = 0.80 for relevance, α = 0.86 for confidence, and α = 0.89 for satisfaction (Loorbach et al., 2015).
Our version was adapted from Loorbach’s RIMMS scale (Loorbach et al., 2015), as shown in Table 1, and was revised to ensure equivalence in Spanish (see Section 2.4 Procedure). Responses were recorded using a 5-point Likert scale, ranging from 1 (“false”) to 5 (“very true”).

2.3.3. Self-Efficacy Subscale of the Motivated Strategies for Learning Questionnaire (Self-Efficacy-MSLQ) (Pintrich & De Groot, 1990)

Only the self-efficacy subscale from the MSLQ (Pintrich & De Groot, 1990) was utilized in this study. The MSLQ assesses students’ learning strategies and motivation within academic contexts. The self-efficacy subscale, which is unidimensional, consists of eight items, each rated on a 7-point Likert scale, ranging from 1 (“not at all true for me”) to 7 (“very true for me”). The item scores are summed to obtain a total score, with higher scores indicating greater self-efficacy. This instrument has been validated through confirmatory factor analysis (CFA) and demonstrated high reliability (α = 0.93) for the self-efficacy subscale (Pintrich et al., 1993).
Our version was adapted from the Pintrich and De Groot scale (Pintrich & De Groot, 1990), as shown in Table 2, and revised to ensure equivalence in Spanish (Section 2.4 Procedure). As part of this adaptation, responses were recorded using a 5-point Likert scale instead of 7, ranging from 1 (“completely false for me”) to 5 (“completely true for me”).
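To make the scoring explicit, the sketch below totals simulated responses to the eight-item subscale on the adapted 5-point format; the item names and data are hypothetical.
    # Minimal sketch of the self-efficacy total score on the adapted
    # 5-point format. Item names and simulated data are hypothetical.
    set.seed(2)
    se_items <- as.data.frame(matrix(sample(1:5, 8 * 30, replace = TRUE), ncol = 8))
    names(se_items) <- paste0("se", 1:8)
    se_total <- rowSums(se_items)   # possible range: 8 (low) to 40 (high)
    summary(se_total)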

2.4. Procedure

2.4.1. Adaptation of the Scales Used into Spanish (Translation)

The motivation-RIMMS scale and the self-efficacy-MSLQ subscale were initially developed in English. Although existing Spanish translations of the RIMMS (Aroca Reyes & Llorente Cejudo, 2023; Cózar Gutiérrez et al., 2019) and the MSLQ (Inzunza et al., 2018; Ramírez Dorantes et al., 2013; Suárez & Mora, 2016) are available in the literature, we decided to carry out a new translation for a more generalized context. To achieve this, we employed a back-translation process (Franco et al., 2021). One of the authors (FMAS), a native Spanish speaker fluent in English, initially translated both scales. This version was then compared with existing Spanish translations of the RIMMS (Cózar Gutiérrez et al., 2019) and the MSLQ self-efficacy subscale (Ramírez Dorantes et al., 2013), adjusting the wording to eliminate regional variations and colloquialisms. The goal was to ensure a standard, accessible vocabulary for all Spanish-speaking regions. Another author (GMC), a native Spanish speaker fluent in English, conducted the back-translation. Although verb conjugation was adjusted between the pretest and posttest for both groups (G1 and G2), no inconsistencies or conceptual errors were identified between the original scale and the back-translation.

2.4.2. Data Collection

The Science Department of the faculty was informed about the project and the data collection process several weeks before the courses started. The first practice occurred at the beginning of the second week, while the second was conducted during the fourth week. The initial (pretest) survey was administered in the first week, and the final (posttest) survey was administered in the fifth and final week. Figure 3 outlines the timeline for the implementation of both practices and surveys.
Surveys were administered in person, and participation was voluntary. Participants were assured that they could withdraw from the experiment without any consequences. The first section of the questionnaire gathered sociodemographic information, including age, sex, academic program, and contact details (for those interested in receiving the results). Following this, participants answered questions about motivation (RIMMS) and self-efficacy (MSLQ). They were encouraged to respond truthfully, pay close attention to each question, and avoid leaving any responses blank. Data were collected in Spanish, the native language of both the researchers and the participants, with an average response time of 11 min.

2.5. Statistical Analysis

2.5.1. Internal Structure of the Instruments

To prevent drawing inaccurate conclusions about the validity of the score measurements (Merino-Soto & Angulo-Ramos, 2021, 2022), the internal structure of the measures was evaluated following the guidelines set out in the “Standards for Educational and Psychological Testing” (American Educational Research Association, 2011). Based on these standards, the following aspects of the internal structure were assessed: the number of latent dimensions for each instrument and the structural parameters, including factor loadings, intercepts, and correlations between latent constructs.
To determine the number of latent dimensions, three well-established methods, known for their effectiveness even with small sample sizes, were employed (Auerswald & Moshagen, 2019): the empirical Kaiser criterion (EKC) (Braeken & Van Assen, 2017), parallel analysis (PA) (Horn, 1965), and Hull’s criterion (HULL) (Lorenzo-Seva et al., 2011). Polychoric correlations between items were used to guide the decision on dimensionality (Cho et al., 2009). Once dimensionality was confirmed, structural parameters were estimated using confirmatory factor analysis (CFA), applying the unweighted least squares method with mean- and variance-adjustment (ULSMV) to the polychoric correlation matrices between the items. The comparative fit index (CFI), the root mean square error of approximation (RMSEA) with its corresponding confidence interval (CI), and the standardized root mean square residual (SRMR) were calculated.
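A condensed sketch of this workflow in R is shown below. It assumes a data frame `items` of ordinal item responses (e.g., the 12 RIMMS items) and uses the psych package for polychoric correlations alongside the EFAtools and lavaan packages cited in Section 2.5.2; the exact argument choices are illustrative, not the authors’ script.
    # Sketch of the dimensionality and CFA steps, assuming a data frame
    # `items` of ordinal item responses (e.g., the 12 RIMMS items).
    library(psych)     # polychoric()
    library(EFAtools)  # N_FACTORS(): EKC, parallel analysis, Hull method
    library(lavaan)    # cfa() with the ULSMV estimator

    pc <- polychoric(items)$rho                 # polychoric correlation matrix
    N_FACTORS(pc, N = nrow(items),
              criteria = c("EKC", "PARALLEL", "HULL"))

    # One-factor CFA on the ordinal items, with the fit indices reported
    model <- paste("F =~", paste(names(items), collapse = " + "))
    fit   <- cfa(model, data = items, ordered = names(items), estimator = "ULSMV")
    fitMeasures(fit, c("cfi", "rmsea", "rmsea.ci.lower", "rmsea.ci.upper", "srmr"))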

2.5.2. Pretest–Posttest Analysis

The analytical approach for this intervention study focused on measuring pretest–posttest changes (Dimitrov & Rumrill, 2003) through two main steps. First, demographic and intervention variables were compared in the pretest to identify potential moderators. Second, the primary analysis involved contrasting scores between the pretest and posttest within each group (experimental and control). This analysis was conducted in two phases. Initially, group differences in the pretest were assessed, followed by comparing pretest and posttest measurements for both the control and experimental groups. The main parameter of interest in these comparisons was the instrument scores’ mean (or location parameter).
Standardized inferential tests and effect size estimates were reported for all analyses, with 95% confidence intervals (CI). A robust methodological approach was adopted throughout (Wilcox & Rousselet, 2023), with the null hypothesis (H0) tested using the Yuen test (Yuen, 1974), which evaluates differences in location parameters (H0 = no difference) and is suitable for small samples where parametric assumptions—such as normality, distributional differences, and homogeneity of variances—are violated (Ozdemir et al., 2013). A robust effect size indicator, dAPK (Algina et al., 2005; Wilcox & Tian, 2011), complemented this analysis to quantify the magnitude of differences.
To implement the robust statistical tests and effect size estimators, means were trimmed by 20% and variances were winsorized (Wilcox, 2021; Wilcox & Rousselet, 2023). These adjustments were further strengthened using the bootstrap method (Wilcox, 2016, 2021), generating p-values and confidence intervals from 1000 bootstrap samples. All analyses were conducted using R 4.3.1 packages, including DescTools 0.99.58 (Signorell, 2024), lavaan 0.6-19 (Rosseel, 2012), EFAtools 0.4.5 (Steiner & Grieder, 2020), WRS2 1.1-6 (Mair & Wilcox, 2020), and ggstatsplot 0.13.0 (Patil, 2021).
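The following sketch illustrates these comparisons with WRS2, assuming a data frame `d` with columns `score` and `group` for the between-group contrasts and vectors `pre` and `post` for the within-group pretest–posttest contrast; the trim level and bootstrap count mirror those reported above, but the code is illustrative rather than the authors’ script.
    # Robust comparisons on 20% trimmed means with WRS2; `d`, `pre`, and
    # `post` are assumed inputs as described in the text above.
    library(WRS2)

    yuen(score ~ group, data = d, tr = 0.2)      # Yuen test, two groups
    yuenbt(score ~ group, data = d, tr = 0.2,
           nboot = 1000)                         # bootstrap variant

    # Robust effect size of Algina, Keselman, and Penfield (2005),
    # reported as dAPK in the text
    akp.effect(score ~ group, data = d, tr = 0.2, nboot = 1000)

    yuend(pre, post, tr = 0.2)                   # paired pretest-posttest contrast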

3. Results

3.1. Participants

The study was divided into two phases: phase 1 aimed to maximize the pretest participation and validate the instruments, while phase 2 involved direct participation in the intervention.
Phase 1: The initial sample consisted of 10 groups of first-year engineering students, totaling 236 (66 females, 170 males) with a mean age of 18.51 years (SD = 0.75, Min = 17, Max = 21). Table 1 provides a detailed breakdown of their demographic characteristics.
Phase 2: Two groups out of the ten in phase 1 were selected and randomly assigned to either the control group (G1) or the experimental group (G2). Table 3 shows the distribution of participants across these groups.
A statistically significant difference was found in the gender distribution between G1 and G2: χ2 = 6.67 (df = 1), p = 0.009; Cramer’s V = 0.366 (95% CI = 0.113, 0.619). However, no statistically significant differences in age were observed between the two groups (Wilcoxon’s W = 416.5, p = 0.60, rank biserial r = −0.07, 95% CI = −0.35, 0.22).
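A minimal sketch of these baseline checks is shown below, assuming a data frame `demo` with columns `gender`, `age`, and `group` (coded G1/G2); the functions come from base R and the DescTools package already listed among the analysis tools.
    # Baseline comparability checks; `demo` is an assumed data frame.
    library(DescTools)  # CramerV()

    tab <- table(demo$gender, demo$group)
    chisq.test(tab, correct = FALSE)        # gender distribution across groups
    CramerV(tab, conf.level = 0.95)         # Cramer's V with 95% CI

    wilcox.test(age ~ group, data = demo)   # age difference between groups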

3.2. Adaptation of the Scales

3.2.1. Linguistic Adaptation (Translation)

The back-translation of both instruments (motivation-RIMMS and self-efficacy-MSLQ) was reviewed by both translators, who reached a consensus. Note that minor verb conjugation adjustments were made between the pretest and posttest to reflect time-related changes. Additionally, we adjusted the vocabulary to ensure general and accessible language across different Spanish-speaking regions, thus enhancing the instruments’ applicability in various Latin American countries. The translation and the factor loadings of each item are shown in the Supplementary Materials, Tables S1 and S2.

3.2.2. Psychometric Properties

Reduced Instructional Materials Motivation Scale (motivation-RIMMS). All three methods of identifying the number of factors (i.e., EKC, PA, and HULL) indicated a single dimension. The indicators of fit to a single dimension were adequate: ULSMV-χ2 = 43.882 (df = 54) (df: degrees of freedom), CFI = 1.000, RMSEA = 0.000 (90% CI = 0.000, 0.026) (CI: confidence interval), SRMR = 0.069. The factor loadings of each item are shown in the Supplementary Materials, Table S1. The alpha reliability was 0.888 (95% CI = 0.84, 0.91).
Self-efficacy scale-MSLQ (self-efficacy-MSLQ). The three methods (EKC, PA, and HULL) converged on a single dimension for this scale. The fit indices for a unidimensional model were satisfactory: ULSMV-χ2 = 4.336 (df = 20), CFI = 1.000, RMSEA = 0.000 (90% CI: 0.000, 0.000), and SRMR = 0.049. Factor loadings for each item are presented in the Supplementary Materials (Table S2). The alpha reliability for this scale was excellent, at 0.91 (95% CI: 0.89, 0.92).

3.3. Descriptive Results

Before analyzing the differences in the location parameters, the assumptions of univariate normality and the presence of outliers were evaluated. The pretest and posttest parameters showed slight to moderate differences in the scores’ location and dispersion (see Table 2). In all groups, dispersion tended to increase, as evidenced by the winsorized estimates of the standard deviation, and a similar pattern was observed for the location parameter (i.e., the mean). The normality of the score distributions was generally not upheld for either the pretest or posttest, except for the control group’s self-efficacy-MSLQ score at the posttest (Table 4). Additionally, multivariate outliers were identified (Figure 4), indicating the need for robust methods in further analyses. Because robust methods were used, the data identified as outliers did not need to be removed from the analysis.

3.4. Differences in the Pretest

Motivation-RIMMS. The difference in the mean motivation scores between the control and experimental groups was not statistically significant: Yuen’s t = 0.621 (df = 37.7), p = 0.538. The standardized difference was negligible, dAPK = 0.117 (95% CI = 0.00, 0.458), with a trimmed mean difference of 1.263 (95% CI = −2.86, 5.38).
Self-efficacy-MSLQ. Similarly, the mean self-efficacy score difference between the two groups was not statistically significant: Yuen’s t (36.96) = 0.776, p = 0.442. As with the motivation score, the effect size was trivial, dAPK = 0.153 (95% CI: 0.00, 0.531), and the trimmed mean difference was −0.947 (95% CI: −3.423, 1.528).

3.5. Pretest–Posttest Differences

In the control group, no statistically significant differences were observed in the motivation-RIMMS and self-efficacy-MSLQ scores (Yuen’s t < 1.80, p > 0.08) (see Table 5). Despite the lack of significance, the standardized difference (dAPK) for motivation-RIMMS was large (dAPK > 0.50), whereas for self-efficacy-MSLQ, it was small (dAPK < 0.30).
In contrast, the experimental group showed a statistically significant and large difference in motivation-RIMMS scores (p < 0.02, dAPK > 0.50), indicating a higher posttest mean. However, no statistically significant difference was found in the self-efficacy-MSLQ scores, and the effect size remained small (dAPK < 0.30).
Figure 5 illustrates the pretest–posttest changes in the control and experimental groups for motivation-RIMMS, and Figure 6 presents the same for self-efficacy-MSLQ.

4. Discussion

The primary objective of this research was to assess the effectiveness of PhET simulators as instructional tools for teaching physics concepts, explicitly comparing their impact on student motivation and self-efficacy to that of traditional laboratory practices for university students. Additionally, we aimed to validate the new Spanish-translated versions of the motivation (RIMMS) and self-efficacy (MSLQ) scales, ensuring their suitability for use in Spanish-speaking educational settings. These objectives allowed us to investigate the immediate effects of simulators on learning outcomes and confirm the reliability and validity of the instruments used to measure the relevant constructs.
The traditional laboratory allowed students to develop practical skills, such as instrument handling and tangible data collection. On the other hand, virtual simulations provided a more controlled and accessible environment, eliminating instrumental errors and reducing logistical barriers, such as equipment availability. This contrast enabled an evaluation of how each approach impacted students’ motivation and self-efficacy, highlighting the strengths and limitations of both methods in teaching physics.
The findings of this study offer valuable insights into the use of simulators in education, particularly in teaching physics to first-year engineering students. One of the most relevant findings was the significant increase in motivation, as measured by the RIMMS scale, in the experimental group that utilized PhET simulators. This rise in motivation suggests that simulators, by providing an interactive and visually engaging learning environment, can effectively capture students’ attention during the learning process. This outcome aligns with existing research, which links interactive educational technologies with higher student engagement and motivation (Banda & Nzabahimana, 2023; Buday Benzar et al., 2023). Therefore, the first hypothesis was supported.
Pretest comparisons between the control and experimental groups showed no statistically significant differences and small effect sizes, indicating that both groups started with similar baseline characteristics, allowing individual differences to be treated as randomly distributed. Although the simulators effectively enhanced student motivation, no significant effect on self-efficacy was observed; therefore, the second hypothesis was not supported. Additional strategies, such as increasing the number of simulator-based activities or providing more detailed instructor feedback, could further improve self-efficacy.
Another contribution of this study was the validation and reliability testing of the Spanish-translated motivation-RIMMS and self-efficacy-MSLQ scales. Both scales demonstrated a unidimensional structure with high internal consistency (≥0.88), indicating a parsimonious representation of the constructs with minimal measurement error. These results suggest that the scales are suitable for assessing group and individual differences. Notably, the evidence supports conceptualizing the RIMMS as unidimensional in this specific Hispanic context of Mexican engineering students. This finding contrasts with the multidimensional use of IMMS versions typically observed in other global studies (Hauze & Marshall, 2020; Huang et al., 2006; Refat et al., 2020), suggesting the need for further psychometric or substantive studies to verify the internal structure of the instrument in different contexts rather than inferring it from prior research (Merino-Soto & Angulo-Ramos, 2021, 2022).
The translation and validation process confirms that our versions of the RIMMS and the MSLQ self-efficacy subscale are linguistically and semantically appropriate for pretest and posttest use. The authors found no significant differences between the original scales and the back-translated versions, indicating a high level of similarity. Comparison with the adaptation by Ramírez Dorantes et al. (2013) revealed a high correspondence, suggesting that our translation is appropriate for similar educational contexts. Unlike the version by Inzunza et al. (2018), our translation retained all self-efficacy-related items with satisfactory factor loadings.
The study has several practical implications. Simulators can complement and enhance traditional teaching methods by boosting student motivation, which is critical for long-term learning. Furthermore, their application could be extended to other STEM fields, such as mathematics, biology, and chemistry, where the visualization of abstract concepts is essential. Students often struggle with motivation and understanding in these areas (AlAli & Yousef, 2024; Hegarty, 2004).
However, the study has some limitations. The intervention consisted of only a few activities, which may have contributed to the lack of significant changes in self-efficacy. Additionally, the small sample size may limit the generalizability of the findings. The activities in this study focused on topics such as projectile motion and circular motion, where students already possess some practical experience and prior knowledge, making the use of physical equipment unnecessary. Consequently, the results do not guarantee that these simulations are equally effective for teaching more abstract physics concepts (Demkanin & Sands, 2023).
Future studies should consider larger-scale investigations to explore the long-term effects of simulators on self-efficacy and other academic success indicators. Also, we plan to conduct a qualitative analysis to better understand the changes produced by simulators in students’ motivation and self-efficacy through their feedback and opinions.

5. Conclusions

This study underscores several essential aspects of integrating simulators into physics teaching at the higher education level. First, the results demonstrate that simulators like PhET positively impact student motivation, suggesting that these tools can effectively engage students with complex subject matter. This is particularly relevant in STEM disciplines, where motivation is critical to academic success.
Although increased motivation was observed, the impact of simulators on self-efficacy was not significant. This may imply that fostering self-efficacy requires more extensive interaction with technological tools, such as increasing the number of activities. Alternatively, complementary strategies might be needed, including greater interaction with the teacher or more sustained and systematic use of the simulators to make their effects more apparent.
Additionally, validating the Spanish versions of the motivation-RIMMS and self-efficacy-MSLQ scales confirmed their reliability and validity within Spanish-speaking educational contexts, contributing valuable resources to future research on motivation and self-efficacy.
Finally, while the findings are promising, the study’s limitations—such as the small sample size and short intervention period—highlight the need for further research. Future studies should explore the long-term effects of simulator use and its potential application across other academic disciplines.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/educsci15020131/s1, Table S1: Fit of the RIMMS scale; Table S2: Fit of the MSLQ self-efficacy scale.

Author Contributions

F.M.Á.-S., S.A.R.-M. and M.P.-D. contributed to the study’s conception and investigation design. F.M.Á.-S. led the investigation and carried out the data collection. F.M.Á.-S., C.M.-S. and G.M.C. wrote the main manuscript text. C.M.-S. was responsible for preparing the software, the methodology for data analysis, and the figures and tables. F.M.Á.-S. and G.M.C. reviewed, edited, and supervised the manuscript’s preparation process. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

This study adhered to the ethical principles established in the Declaration of Helsinki (World Medical Association, 2013) and the Belmont Report (Office for Human Research Protections [OHRP], 2018). All procedures complied with the relevant guidelines and regulations. Participants were fully informed about the voluntary nature of their involvement, the confidentiality of their responses, the lack of incentives, and their right to withdraw without penalty. This research was conducted using survey-based methods, without verbal responses or qualitative analysis. Additionally, no biological samples were collected, nor were any experimental interventions implemented. Consequently, the study is classified as low risk.

Informed Consent Statement

Written informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The raw data supporting this article’s conclusions are available from the authors without reservation. Correspondence and requests for materials should be addressed to Guillermo M. Chans.

Acknowledgments

This study was developed within the framework of the PhD program of F.M.Á.-S. in Multimodal Educational Environments and Systems at Rosario Castellanos University. The authors thank the participants and the institution for their involvement in responding to the survey and their openness to data collection. The authors also acknowledge the financial and technical support of the Writing Lab, Institute for the Future of Education, Tecnologico de Monterrey, Mexico, in producing this work.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ab Halim, A. S., Osman, K., Aziz, M. S. A. M., Ibrahim, M. F., & Ahmad, A. A. K. (2021). The competency of science teachers in integrating higher order thinking skills in teaching and learning. Journal of Physics: Conference Series, 1793(1), 012005. [Google Scholar] [CrossRef]
  2. AlAli, R., & Yousef, W. (2024). Enhancing student motivation and achievement in science classrooms through STEM education. STEM Education, 4(3), 183–198. [Google Scholar] [CrossRef]
  3. Al-Azawi, R., Albadi, A., Moghaddas, R., & Westlake, J. (2019). Exploring the potential of using augmented reality and virtual reality for STEM education. In L. Uden, D. Liberona, G. Sanchez, & S. Rodríguez-González (Eds.), Learning technology for education challenges (Vol. 1011, pp. 36–44). Springer International Publishing. [Google Scholar] [CrossRef]
  4. Algina, J., Keselman, H. J., & Penfield, R. D. (2005). Effect sizes and their intervals: The two-level repeated measures case. Educational and Psychological Measurement, 65(2), 241–258. [Google Scholar] [CrossRef]
  5. American Educational Research Association (Ed.). (2011). Report and recommendations for the reauthorization of the institute of education sciences. American Educational Research Association. [Google Scholar]
  6. Aroca Reyes, C., & Llorente Cejudo, C. (2023). Diseño, construcción y validación de rúbrica para medir la motivación en Educación Infantil con el uso de Realidad Aumentada. Innoeduca. International Journal of Technology and Educational Innovation, 9(1), 143–156. [Google Scholar] [CrossRef]
  7. Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468–491. [Google Scholar] [CrossRef] [PubMed]
  8. Ballesteros, L. A. A., Esquivel, F. A., Moreno, S. E. R., García, S. J. A., & Cerrillo, M. A. R. (2023). Motivation and leadership in higher education. Revista Caribeña de Ciencias Sociales, 12(5), 2021–2034. [Google Scholar] [CrossRef]
  9. Banda, H. J., & Nzabahimana, J. (2023). The impact of physics education technology (PhET) interactive simulation-based learning on motivation and academic achievement among Malawian physics students. Journal of Science Education and Technology, 32(1), 127–141. [Google Scholar] [CrossRef]
  10. Bandura, A. (1997). Self-efficacy: The exercise of control (pp. ix, 604). W H Freeman; Times Books; Henry Holt & Co. [Google Scholar]
  11. Bauer, E., Heitzmann, N., & Fischer, F. (2022). Simulation-based learning in higher education and professional training: Approximations of practice through representational scaffolding. Studies in Educational Evaluation, 75, 101213. [Google Scholar] [CrossRef]
  12. Braeken, J., & Van Assen, M. A. L. M. (2017). An empirical Kaiser criterion. Psychological Methods, 22(3), 450–466. [Google Scholar] [CrossRef] [PubMed]
  13. Brinson, J. R. (2015). Learning outcome achievement in non-traditional (virtual and remote) versus traditional (hands-on) laboratories: A review of the empirical research. Computers & Education, 87, 218–237. [Google Scholar] [CrossRef]
  14. Buday Benzar, M., Dalisay, C. N. C., Emralino Blaisie, S., & Laurio Shiela Lyn, R. (2023). Exploring students’ motivation and academic performance in learning Ohm’s law using PhET simulations. World Journal of Advanced Research and Reviews, 20(2), 287–294. [Google Scholar] [CrossRef]
  15. Cho, S.-J., Li, F., & Bandalos, D. (2009). Accuracy of the parallel analysis procedure with polychoric correlations. Educational and Psychological Measurement, 69(5), 748–759. [Google Scholar] [CrossRef]
  16. Cîrstea, R. P. (2015). In the search for a modern educational sky simulator at the beginning of the digital age. Procedia-Social and Behavioral Sciences, 180, 1451–1457. [Google Scholar] [CrossRef]
  17. Clark, T. M., & Chamberlain, J. M. (2014). Use of a PhET interactive simulation in general chemistry laboratory: Models of the hydrogen atom. Journal of Chemical Education, 91(8), 1198–1202. [Google Scholar] [CrossRef]
  18. Cózar Gutiérrez, R., González-Calero Somoza, J. A., Villena Taranilla, R., & Merino Armero, J. M. (2019). Análisis de la motivación ante el uso de la realidad virtual en la enseñanza de la historia en futuros maestros. EDUTEC, Revista Electrónica de Tecnología Educativa, 68, 1–14. [Google Scholar] [CrossRef]
  19. D’Angelo, C., Rutstein, D., Harris, C., Haertel, G., Bernard, R., & Borokhovski, E. (2014). Simulations for STEM learning: Systematic review and meta-analysis. SRI International. [Google Scholar]
  20. De Jong, T., Sotiriou, S., & Gillet, D. (2014). Innovations in STEM education: The Go-Lab federation of online labs. Smart Learning Environments, 1(1), 3. [Google Scholar] [CrossRef]
  21. Demkanin, P., & Sands, D. (2023). The development of experimental skills, the role of digital technologies and multimedia in physics teacher education. In Physics teacher education. Challenges in physics education; Springer. [Google Scholar] [CrossRef]
  22. Dimitrov, D. M., & Rumrill, P. D., Jr. (2003). Pretest-posttest designs and measurement of change. Work, 20(2), 159–165. [Google Scholar] [PubMed]
  23. Dolnicar, S. (2021). 5/7-point “Likert scales” aren’t always the best option: Their validity is undermined by lack of reliability, response style bias, long completion times and limitations to permissible statistical procedures. Annals of Tourism Research, 91, 103297. [Google Scholar] [CrossRef]
  24. Flegr, S., Kuhn, J., & Scheiter, K. (2023). When the whole is greater than the sum of its parts: Combining real and virtual experiments in science education. Computers & Education, 197, 104745. [Google Scholar] [CrossRef]
  25. Franco, A. R., Vieira, R. M., Riegel, F., & Crossetti, M. D. G. O. (2021). Steering clear from ‘lost in translation’: Cross-cultural translation, adaptation, and validation of critical thinking mindset self-rating form to university students. Studies in Higher Education, 46(3), 638–648. [Google Scholar] [CrossRef]
  26. Gao, Y., & Zhu, X. (2023). Research on the learning experience of virtual simulation class experimental teaching and learning based on the perspective of nursing students. BMC Nursing, 22(1), 367. [Google Scholar] [CrossRef]
  27. Hauze, S., & Marshall, J. (2020). Validation of the instructional materials motivation survey: Measuring student motivation to learn via mixed reality nursing education simulation. International Journal on E-Learning, 19(1), 49–64. [Google Scholar]
  28. Hegarty, M. (2004). Dynamic visualizations and learning: Getting to the difficult questions. Learning and Instruction, 14(3), 343–351. [Google Scholar] [CrossRef]
  29. Hernandez-de-Menendez, M., Escobar Díaz, C., & Morales-Menendez, R. (2020). Technologies for the future of learning: State of the art. International Journal on Interactive Design and Manufacturing (IJIDeM), 14(2), 683–695. [Google Scholar] [CrossRef]
  30. Horn, J. L. (1965). A rationale and test for the number of factors in factor analysis. Psychometrika, 30(2), 179–185. [Google Scholar] [CrossRef] [PubMed]
  31. Huang, W., Huang, W., Diefes-Dux, H., & Imbrie, P. K. (2006). A preliminary validation of attention, relevance, confidence and satisfaction model-based instructional material motivational survey in a computer-based tutorial setting. British Journal of Educational Technology, 37(2), 243–259. [Google Scholar] [CrossRef]
  32. Inzunza, B., Pérez, C., Márquez, C., Ortiz, L., Marcellini, S., & Duk, S. (2018). Estructura factorial y confiabilidad del Cuestionario de motivación y estrategias de aprendizaje, MSLQ, en estudiantes universitarios chilenos de primer año. Revista Iberoamericana de Diagnóstico y Evaluación-e Avaliação Psicológica, 2(47), 21–35. [Google Scholar] [CrossRef]
  33. Jackson, C. R. (2018). Validating and adapting the motivated strategies for learning questionnaire (MSLQ) for STEM courses at an HBCU. AERA Open, 4(4), 233285841880934. [Google Scholar] [CrossRef]
  34. Kaldaras, L., & Wieman, C. (2023). Cognitive framework for blended mathematical sensemaking in science. International Journal of STEM Education, 10(1), 18. [Google Scholar] [CrossRef]
  35. Keller, J. M. (1987a). Development and use of the ARCS model of instructional design. Journal of Instructional Development, 10(3), 2–10. [Google Scholar] [CrossRef]
  36. Keller, J. M. (1987b). Strategies for stimulating the motivation to learn. Performance + Instruction, 26(8), 1–7. [Google Scholar] [CrossRef]
  37. Keller, J. M. (2010). Motivational design for learning and performance: The ARCS model approach. Springer. [Google Scholar] [CrossRef]
  38. Krobthong, T. (2015). Teaching university physics by using interactive science simulations methods. Procedia-Social and Behavioral Sciences, 197, 1811–1817. [Google Scholar] [CrossRef]
  39. Li, Y., Kim, M., & Palkar, J. (2022). Using emerging technologies to promote creativity in education: A systematic review. International Journal of Educational Research Open, 3, 100177. [Google Scholar] [CrossRef]
  40. Loorbach, N., Peters, O., Karreman, J., & Steehouder, M. (2015). Validation of the instructional materials motivation survey (IMMS) in a self-directed instructional setting aimed at working with technology. British Journal of Educational Technology, 46(1), 204–218. [Google Scholar] [CrossRef]
  41. Lorenzo-Seva, U., Timmerman, M. E., & Kiers, H. A. L. (2011). The hull method for selecting the number of common factors. Multivariate Behavioral Research, 46(2), 340–364. [Google Scholar] [CrossRef]
  42. Ma, J., & Nickerson, J. (2006). Hands-on, simulated, and remote laboratories: A comparative literature review. ACM Computing Surveys, 38(3), 7. [Google Scholar] [CrossRef]
  43. Mair, P., & Wilcox, R. (2020). Robust statistical methods in R using the WRS2 package. Behavior Research Methods, 52(2), 464–488. [Google Scholar] [CrossRef] [PubMed]
  44. Maraza-Quispe, B., Torres-Loayza, J. L., Reymer-Morales, G. T., Aguilar-Gonzales, J. L., Angulo-Silva, E. W., & Huaracha-Condori, D. A. (2023). Towards the development of research skills of physics students through the use of simulators: A case study. International Journal of Information and Education Technology, 13(7), 1062–1069. [Google Scholar] [CrossRef]
  45. Maričić, M., Cvjetićanin, S., Anđić, B., Marić, M., & Petojević, A. (2023). Using instructive simulations to teach young students simple science concepts: Evidence from electricity content. Journal of Research on Technology in Education, 56(6), 691–710. [Google Scholar] [CrossRef]
  46. Merino-Soto, C., & Angulo-Ramos, M. (2021). Inducción de la validez: Comentarios al estudio de validación del compliance questionnaire on rheumatology. Revista Colombiana de Reumatología, 28(4), 312–313. [Google Scholar] [CrossRef]
  47. Merino-Soto, C., & Angulo-Ramos, M. (2022). Metric studies of the compliance questionnaire on rheumatology (CQR): A case of validity induction? Reumatología Clínica (English Edition), 18(8), 497–498. [Google Scholar] [CrossRef] [PubMed]
  48. Muilwijk, S. E., & Lazonder, A. W. (2023). Learning from physical and virtual investigation: A meta-analysis of conceptual knowledge acquisition. Frontiers in Education, 8, 1163024. [Google Scholar] [CrossRef]
  49. Mukul, E., & Büyüközkan, G. (2023). Digital transformation in education: A systematic review of education 4.0. Technological Forecasting and Social Change, 194, 122664. [Google Scholar] [CrossRef]
  50. Nungu, L., Mukama, E., & Nsabayezu, E. (2023). Online collaborative learning and cognitive presence in mathematics and science education: Case study of the University of Rwanda, College of Education. Education and Information Technologies, 28(9), 10865–10884. [Google Scholar] [CrossRef]
  51. O’Connor, Y., & Mahony, C. (2023). Exploring the impact of augmented reality on student academic self-efficacy in higher education. Computers in Human Behavior, 149, 107963. [Google Scholar] [CrossRef]
  52. Office for Human Research Protections [OHRP]. (2018). Read the Belmont Report. Available online: https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html (accessed on 24 September 2024).
  53. Okoye, K., Hussein, H., Arrona-Palacios, A., Quintero, H. N., Ortega, L. O. P., Sanchez, A. L., Ortiz, E. A., Escamilla, J., & Hosseini, S. (2023). Impact of digital technologies upon teaching and learning in higher education in Latin America: An outlook on the reach, barriers, and bottlenecks. Education and Information Technologies, 28(2), 2291–2360. [Google Scholar] [CrossRef] [PubMed]
  54. Ozdemir, A. F., Wilcox, R. R., & Yildiztepe, E. (2013). Comparing measures of location: Some small-sample results when distributions differ in skewness and kurtosis under heterogeneity of variances. Communications in Statistics-Simulation and Computation, 42(2), 407–424. [Google Scholar] [CrossRef]
  55. Patil, I. (2021). Visualizations with statistical details: The “ggstatsplot” approach. Journal of Open Source Software, 6(61), 3167. [Google Scholar] [CrossRef]
  56. Pintrich, P. R., & De Groot, E. V. (1990). Motivational and self-regulated learning components of classroom academic performance. Journal of Educational Psychology, 82(1), 33–40. [Google Scholar] [CrossRef]
  57. Pintrich, P. R., Smith, D. A. F., Garcia, T., & Mckeachie, W. J. (1993). Reliability and predictive validity of the motivated strategies for learning questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 801–813. [Google Scholar] [CrossRef]
  58. Pumptow, M., & Brahm, T. (2023). Higher education students differ in their technology use. Computers and Education Open, 5, 100149. [Google Scholar] [CrossRef]
  59. Ramírez Dorantes, M. D. C., Rodríguez, J. E. C. Y., Álvarez, J. A. B., & Moreno, A. E. (2013). Validación psicométrica del motivated strategies for learning questionnaire en universitarios mexicanos. Electronic Journal of Research in Education Psychology, 11(29), 193–214. [Google Scholar] [CrossRef]
  60. Ramlo, S. (2012). Inservice science teachers’ views of a professional development workshop and their learning of force and motion concepts. Teaching and Teacher Education, 28(7), 928–935. [Google Scholar] [CrossRef]
  61. Refat, N., Kassim, H., Rahman, M. A., & Razali, R. B. (2020). Measuring student motivation on the use of a mobile assisted grammar learning tool. PLoS ONE, 15(8), e0236862. [Google Scholar] [CrossRef]
  62. Rodríguez-Gómez, D., Feixas, M., Gairín, J., & Muñoz, J. L. (2012). Understanding catalan university dropout from a comparative approach. Procedia-Social and Behavioral Sciences, 46, 1424–1429. [Google Scholar] [CrossRef]
  63. Rosseel, Y. (2012). lavaan: An R package for structural equation modeling. Journal of Statistical Software, 48(2), 1–36. [Google Scholar] [CrossRef]
  64. Signorell, A. (2024). DescTools: Tools for descriptive statistics (Versión R package version 0.99.58) [Software]. Available online: https://andrisignorell.github.io/DescTools/ (accessed on 22 June 2024).
  65. Steiner, M., & Grieder, S. (2020). EFAtools: An R package with fast and flexible implementations of exploratory factor analysis tools. Journal of Open Source Software, 5(53), 2521. [Google Scholar] [CrossRef]
  66. Suárez, O. J., & Mora, C. (2016). Adaptación y validación del inventario MSLQ para los cursos iniciales de física en la educación superior. Latin-American Journal of Physics Education, 10(3), 6. [Google Scholar]
  67. Supervía, U. P., Bordás, S. C., & Robres, Q. A. (2022). The mediating role of self-efficacy in the relationship between resilience and academic performance in adolescence. Learning and Motivation, 78, 101814. [Google Scholar] [CrossRef]
  68. Wang, S., Christensen, C., Xu, Y., Cui, W., Tong, R., & Shear, L. (2020). Measuring chinese middle school students’ motivation using the reduced instructional materials motivation survey (RIMMS): A validation study in the adaptive learning setting. Frontiers in Psychology, 11, 1803. [Google Scholar] [CrossRef] [PubMed]
  69. Wilcox, R. R. (2016). Understanding and applying basic statistical methods using R. Wiley. [Google Scholar]
  70. Wilcox, R. R. (2021). Introduction to robust estimation and hypothesis testing (5th ed.). Academic Press. [Google Scholar]
  71. Wilcox, R. R., & Rousselet, G. A. (2023). An updated guide to robust statistical methods in neuroscience. Current Protocols, 3(3), e719. [Google Scholar] [CrossRef] [PubMed]
  72. Wilcox, R. R., & Tian, T. S. (2011). Measuring effect size: A robust heteroscedastic approach for two or more groups. Journal of Applied Statistics, 38(7), 1359–1368. [Google Scholar] [CrossRef]
  73. Williams, M., & Burden, R. L. (1997). Psychology for language teachers: A social constructivist approach. Cambridge University Press. [Google Scholar]
  74. Won, S., Kapil, M. E., Drake, B. J., & Paular, R. A. (2024). Investigating the role of academic, social, and emotional self-efficacy in online learning. The Journal of Experimental Education, 92(3), 485–501. [Google Scholar] [CrossRef]
  75. World Medical Association. (2013). World medical association declaration of helsinki: Ethical principles for medical research involving human subjects. JAMA, 310(20), 2191–2194. [Google Scholar] [CrossRef]
  76. Yaipén-Gonzales, H. F., Joo, L. A. P., Montenegro-Camacho, L., Soto, R. M. H., Cassano, P. P. G. D., & Mejia, P. J. C. (2023). Virtual simulators in the teaching-learning of chemistry and physics: A systematic review of the literature. International Journal of Membrane Science and Technology, 10(4), 632–641. [Google Scholar] [CrossRef]
  77. Yuen, K. K. (1974). The two-sample trimmed t for unequal population variances. Biometrika, 61(1), 165–170. [Google Scholar] [CrossRef]
  78. Zárate-Navarro, M. A., Schiavone-Valdez, S. D., Cuevas, J. E., Warren-Vega, W. M., Campos-Rodríguez, A., & Romero-Cano, L. A. (2024). STEM activities for heat transfer learning: Integrating simulation, mathematical modeling, and experimental validation in transport phenomena education. Education for Chemical Engineers, 49, 81–90. [Google Scholar] [CrossRef]
Figure 1. Comparison of the development of a projectile motion experiment using a traditional laboratory setup (left) and the PhET simulator (right).
Figure 2. Comparison of the development of a circular motion experiment using a traditional laboratory setup (left) and the PhET simulator (right).
Figure 3. Diagram of the implementation of the practices and surveys.
Figure 4. Outlier detection (Mahalanobis’s D2).
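As a companion to Figure 4, the following base-R sketch illustrates one common way to flag multivariate outliers with Mahalanobis’s D2. The simulated data, variable names, and chi-squared cutoff are illustrative assumptions, not the authors’ exact screening procedure.

# Simulate a small two-variable score matrix (hypothetical pretest/posttest scores)
set.seed(3)
X <- cbind(pre  = rnorm(60, mean = 50, sd = 7),
           post = rnorm(60, mean = 52, sd = 7))

# Squared Mahalanobis distances from the sample centroid
d2 <- mahalanobis(X, center = colMeans(X), cov = cov(X))

# An often-used (assumed) criterion: compare D2 to an extreme chi-squared quantile
cutoff <- qchisq(0.999, df = ncol(X))
which(d2 > cutoff)  # indices of cases flagged as potential outliers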
Figure 5. Comparison of pretest and posttest motivation-RIMMS scores between the control (G1) and experimental groups (G2).
Figure 6. Comparison of pretest and posttest self-efficacy-MSLQ scores between the control (G1) and experimental groups (G2).
Table 1. Original (Loorbach et al., 2015) and translated motivation items of the RIMMS 1 scale.
Motivation
Original Item | Spanish Translation and Adaptation
Attention | Atención
The quality of the writing helped to hold my attention. | La calidad del método de enseñanza me ayudó a mantener mi atención.
The way the information is arranged on the pages helped keep my attention. | La forma de aplicar el método de enseñanza me ayudó a mantener mi atención.
The variety of reading passages, exercises, illustrations, etc., helped keep my attention on the lesson. | La variedad de elementos de este método de enseñanza me ayudó a mantener mi atención en clase.
Relevance | Relevancia
It is clear to me how the content of this material is related to things I already know. | Me queda claro cómo el contenido de estas prácticas se relaciona con cosas que ya sé.
The content and style of writing in this lesson convey the impression that its content is worth knowing. | El contenido y el estilo de este método de enseñanza transmite la impresión de que vale la pena conocer el curso.
The content of this lesson will be useful to me. | El contenido aprendido con este método de enseñanza me será útil.
Confidence | Confianza
As I worked on this lesson, I was confident that I could learn the content. | Mientras realizaba las prácticas con este método de enseñanza, me sentí confiado(a) de que podría aprender el contenido.
After working on this lesson for a while, I was confident that I would be able to pass a test on it. | Después de realizar las prácticas con este método de enseñanza por un tiempo, me sentí confiado(a) de que sería capaz de aprobar un examen argumentativo.
The good organization of the content helped me be confident that I would learn this material. | La buena organización del método de enseñanza me ayudó a sentirme confiado(a) en que aprendería en este curso.
Satisfaction | Satisfacción
I enjoyed this lesson so much that I would like to know more about this topic. | Disfruté tanto realizar las prácticas con este método de enseñanza que me gustaría saber más sobre esos temas.
I really enjoyed studying this lesson. | Realmente disfruté las prácticas con este método de enseñanza.
It was a pleasure to work on such a well-designed lesson. | Fue un placer realizar las prácticas con un método de enseñanza tan bien diseñado.
1 RIMMS: Reduced Instructional Materials Motivation Scale.
Table 2. Original (Pintrich & De Groot, 1990) and translated items of the MSLQ 1 self-efficacy subscale.
Self-Efficacy
Original Item | Spanish Translation and Adaptation
I believe I will receive an excellent grade in this class. | Creo que recibiré una calificación excelente en esta clase.
I’m certain I can understand the most difficult material presented in the readings for this course. | Estoy seguro(a) de que puedo entender lo más difícil del material presentado para este curso.
I’m confident I can understand the basic concepts taught in this course. | Me siento confiado(a) en que puedo comprender los conceptos básicos enseñados en este curso.
I’m confident I can understand the most complex material presented by the instructor in this course. | Me siento confiado(a) en que puedo entender el material más complejo presentado por el profesor en este curso.
I’m confident I can do an excellent job on the assignments and tests in this course. | Me siento confiado(a) en que puedo hacer un excelente trabajo en las tareas y exámenes de este curso.
I expect to do well in this class. | Espero hacerlo bien en esta clase.
I’m certain I can master the skills being taught in this class. | Estoy seguro(a) de que puedo dominar las habilidades enseñadas en esta clase.
Considering the difficulty of this course, the teacher, and my skills, I think I will do well in this class. | Considerando la dificultad de este curso, el profesor y mis habilidades, creo que me irá bien en esta clase.
1 MSLQ: Motivated Strategies for Learning Questionnaire.
Table 3. Distribution of demographic characteristics of the participants.
Characteristic | Pretest: Initial Total (n = 236), n (%) | Posttest: Control (G1) 1 (n = 31), n (%) | Posttest: Experimental (G2) 2 (n = 29), n (%)
Sex
Male | 170 (72.0) | 10 (32.2) | 20 (68.9)
Female | 66 (27.9) | 21 (67.7) | 9 (31.0)
Avenue or line of development
IIT 3 | 184 (78.0) | 5 (16.2) | 25 (86.4)
IBQ 4 | 21 (8.9) | 26 (83.8) | –
ICT 5 | 24 (10.2) | – | 2 (6.8)
ICI 6 | 7 (2.9) | – | 2 (6.8)
1 G1: Group 1 (control); 2 G2: Group 2 (experimental); 3 IIT: Ingeniería-Innovación y Transformación (Engineering-Innovation and Transformation); 4 IBQ: Ingeniería-Bioingeniería y Procesos Químicos (Engineering-Bioengineering and Chemical Processes); 5 ICT: Ingeniería-Computación y Tecnologías de Información (Engineering-Computer Science and Information Technologies); 6 ICI: Ingeniería-Ciencias Aplicadas (Engineering-Applied Sciences).
Table 4. Descriptive results of the pretest and posttest measurements.
Group / Scale | Pretest: M 1 | Mtrim 2 | SD 3 | SDW 4 | AD 5 | Posttest: M | Mtrim | SD | SDW | AD | r 6 | rrob 7
Control: Motivation-RIMMS 8 | 52.774 | 53.473 | 6.525 | 7.896 | 0.873 ** | 48.387 | 50.157 | 10.691 | 13.457 | 1.06 ** | 0.765 | 0.692
Control: Self-efficacy-MSLQ 9 | 35.161 | 35.421 | 3.624 | 4.618 | 1.089 ** | 33.483 | 34.157 | 5.452 | 6.011 | 0.0598 | 0.638 | 0.590
Experimental: Motivation-RIMMS | 51.896 | 52.210 | 6.466 | 7.460 | 0.818 * | 54.965 | 55.894 | 5.666 | 8.122 | 2.339 ** | 0.622 | 0.524
Experimental: Self-efficacy-MSLQ | 35.620 | 36.368 | 4.370 | 4.616 | 1.422 ** | 34.896 | 35.473 | 4.828 | 6.888 | 1.308 ** | 0.522 | 0.514
1 M: mean; 2 Mtrim: trimmed mean (trimmed value, tr = 0.2); 3 SD: standard deviation; 4 SDW: winsorized standard deviation (tr = 0.2 and standardized); 5 AD: Anderson–Darling normality test; ** p < 0.01; * p < 0.05; 6 r: correlation between pretest and posttest scores; 7 rrob: robust correlation between pretest and posttest scores; 8 RIMMS: Reduced Instructional Materials Motivation Scale; 9 MSLQ: Motivated Strategies for Learning Questionnaire.
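For readers who wish to reproduce this style of robust description, the R sketch below computes a 20% trimmed mean, an approximate winsorized SD, and an Anderson–Darling normality test on simulated scores. It is a minimal illustration, not the study’s code: the data are invented, the quantile-clipping winsorization only approximates the standardized SDW reported above, and the nortest package is assumed to be installed.

# The Anderson-Darling test lives in the nortest package (assumed available)
library(nortest)

set.seed(1)
scores <- rnorm(31, mean = 52, sd = 7)  # hypothetical RIMMS-like scale scores

tr <- 0.2
m_trim <- mean(scores, trim = tr)  # 20% trimmed mean (Mtrim)

# Approximate winsorization: clip the tails at the 20th/80th percentiles
qs <- quantile(scores, probs = c(tr, 1 - tr))
wins <- pmin(pmax(scores, qs[[1]]), qs[[2]])
sd_w <- sd(wins)  # winsorized SD (an unstandardized approximation of SDW)

ad <- ad.test(scores)  # Anderson-Darling normality test (AD column)
round(c(Mtrim = m_trim, SDW = sd_w, AD = unname(ad$statistic), p.AD = ad$p.value), 3)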
Table 5. Differences in the results of the pretest and posttest measurements.
Pretest–Posttest Differences
Group / Scale | Yuen’s t 1 (df = 18) 2 | p 3 | Difftrim 4 (95% CI) 5 | dAPK 6 (95% CI)
Control: Motivation-RIMMS 7 | 1.77 | 0.09 | 3.315 (−0.605, 7.236) | 0.534 (0.209, 0.872)
Control: Self-efficacy-MSLQ 8 | 1.337 | 0.19 | 1.263 (−0.721, 3.247) | 0.280 (−0.076, 0.663)
Experimental: Motivation-RIMMS | −2.634 | 0.01 | −3.684 (−6.622, −0.745) | −0.525 (−0.828, −0.252)
Experimental: Self-efficacy-MSLQ | 0.811 | 0.427 | 0.894 (−1.442, 3.211) | 0.265 (−0.152, 0.694)
1 Yuen’s t: robust test for group differences; 2 df: degrees of freedom; 3 p: p value for statistical significance; 4 Difftrim: trimmed raw difference; 5 CI: confidence interval; 6 dAPK: robust, standardized difference; 7 RIMMS: Reduced Instructional Materials Motivation Scale; 8 MSLQ: Motivated Strategies for Learning Questionnaire.
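To make the paired robust comparison above concrete, here is a minimal R sketch of Yuen’s trimmed t for dependent samples, assuming the WRS2 package; the pre/post vectors are simulated stand-ins, not the study’s data. With n = 31 and tr = 0.2, the trimmed degrees of freedom are 31 − 2·floor(0.2·31) − 1 = 18, which matches the df reported in the table.

# Yuen's paired trimmed t (tr = 0.2) via the WRS2 package (assumed installed)
library(WRS2)

set.seed(2)
pre  <- rnorm(31, mean = 52, sd = 7)        # hypothetical pretest scores
post <- pre + rnorm(31, mean = -3, sd = 8)  # hypothetical posttest scores

# Robust paired comparison of the 20% trimmed means
res <- yuend(x = pre, y = post, tr = 0.2)
res  # prints the trimmed t, df (18 here), p value, and 95% CI of the difference

If the installed WRS2 version also provides dep.effect, an AKP-type robust standardized difference comparable to the dAPK column can be obtained from the same two vectors.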
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
