Abstract
Recent publications about humans controlling dynamic
systems have emphasized the role of specific rules or
exemplar knowledge. Although it has been shown that
small systems can be controlled with these types of
knowledge, there is evidence that general knowledge about
the structure of a system plays an important role, too,
particularly when dealing with systems of higher
complexity. However, teaching structural knowledge has
often failed to produce the expected positive effect. The present work
investigates details of acquisition and use of structural
knowledge. It is hypothesized that guiding subjects to focus
on dependencies rather than effects supports them in
applying structural knowledge, especially when the
application is practiced in a strategy training. An
experiment with N=95 subjects supported the hypothesis of
the usefulness of the dependency perspective, but revealed
an adverse effect of the strategy training. Differences
between subgroups studying different majors were
found that raise questions about the relation between
prior knowledge and instruction. The results have
interesting implications for models of how structural
knowledge is represented as well as for methods of
teaching system control efficiently.
Experiment
The system I used in this experiment is a simulation of the
influences of three fictitious medicines on the levels of
three fictitious peptides in the blood. The medicines are
called MedA, MedB, and MedC; the peptides are called
Muron, Fontin, and Sugon. The effects of the substances
on each other are simulated with the following discrete
linear equations:
(1) Muron_t = 0.1 Muron_{t-1} + 2 MedA_t
(2) Fontin_t = Fontin_{t-1} + 0.5 Muron_{t-1} - 0.2 Sugon_{t-1} + MedB_t
(3) Sugon_t = 0.9 Sugon_{t-1} + MedC_t
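The dynamics can be sketched as a one-step update rule. This is a minimal illustration, not the experimental software: the function and variable names are my own, and the sign of the Sugon term in equation (2) is read as negative.

```python
# Minimal sketch of the simulated system, following the discrete
# linear equations (1)-(3); names are illustrative, and the negative
# sign on the Sugon term in (2) is an assumption.

def step(state, inputs):
    """Advance the simulation by one simulated hour.

    state  -- (Muron, Fontin, Sugon) levels at time t-1
    inputs -- (MedA, MedB, MedC) doses administered at time t
    """
    muron, fontin, sugon = state
    med_a, med_b, med_c = inputs
    return (
        0.1 * muron + 2.0 * med_a,                   # (1)
        fontin + 0.5 * muron - 0.2 * sugon + med_b,  # (2)
        0.9 * sugon + med_c,                         # (3)
    )

def rollout(state, input_sequence):
    """Iterate step() over a sequence of inputs, e.g. six simulated hours."""
    trajectory = [state]
    for inputs in input_sequence:
        state = step(state, inputs)
        trajectory.append(state)
    return trajectory
```

Note that with all doses at zero, Muron and Sugon decay geometrically while Fontin only accumulates their residual influence; Fontin itself has no decay term, which is what makes maintaining a Fontin goal state nontrivial.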
Procedure
The experiment began with a general instruction about the
system. All subjects went through a standardized
exploration phase guided by the experimenter. The
exploration was designed to demonstrate all causal
relations between the variables of the system. Subjects
were guided to analyze the observed effects and asked to
enter them on cards provided by the experimenter. The
procedure in this phase was different for the two
knowledge conditions: In the Dep condition, the
experimenter consistently asked for dependencies, and the
cards were sorted by the dependent variables Muron,
Fontin, and Sugon. In the Eff condition, the experimenter
consistently asked for effects, and the cards were sorted
by the independent variables MedA, MedB, and MedC.
At the end of this phase, the experimenter examined the
knowledge of the subject orally, again consistently asking
either for dependencies or for effects. Subjects had to
recall all possible relations with the respective numeric
weights before moving on to the next phase (all subjects
achieved that).
Subjects in the no strategy training condition could
then explore the system for one round (six simulated
hours) on their own. Subjects in the strategy training
condition went through a number of exercises where they
practiced a method of predicting future states of the
system. As mentioned above, this was the first part of a
strategy tested earlier in a cognitive model. Only a part of
the complete strategy was selected to keep the training
short. Nevertheless, all effects (condition Eff) or dependencies (condition Dep) were needed and rehearsed in
these exercises.
Next, all subjects were given the control problems. All
problems comprised six simulated hours, with the
objective of reaching the goal states as soon as possible
and then maintaining them. Table 1 shows the
initial states and the goal states for the four control
problems. Initially, all variables except Fontin were zero.
In order for the subjects to familiarize themselves with the
control task, they were given two rounds for Problem 1.
Table 1: The four control problems given to the subjects
Problem 1: Fontin = 50
Problem 2: Fontin = 900
Problem 3: Fontin = 2000
Problem 4: Fontin = 50
Results
To measure control performance, the solution error was
calculated by summing the natural logs of the absolute
differences between the goal values and the actual values.
Mean solution error (SD) and cell size by condition:

Strategy training    Eff               Dep               Total
yes                  2.9 (1.5), n=24   2.4 (1.5), n=26   2.6 (1.5), n=50
no                   2.4 (1.5), n=22   1.7 (0.9), n=23   2.0 (1.3), n=45
Total                2.6 (1.5), n=46   2.1 (1.3), n=49   2.4 (1.4), n=95
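Read literally, the solution-error measure can be sketched as below. The +1 inside the logarithm is an assumption added here so that a zero difference contributes 0 rather than negative infinity; the text does not specify how exact matches were scored.

```python
import math

def solution_error(goal_values, actual_values):
    """Sum of natural logs of absolute goal-vs-actual differences.

    Assumption: 1.0 is added inside the log so that a perfect match
    (difference 0) contributes 0 instead of -infinity.
    """
    return sum(math.log(abs(goal - actual) + 1.0)
               for goal, actual in zip(goal_values, actual_values))
```

The log compresses large misses, so a single hour with a wildly off value does not dominate the score the way a squared-error measure would.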
[Figure: Mean solution error by subjects' major (Arts/Hum, Econ/Law, Science), plotted separately for the Eff and Dep conditions; y-axis: solution error, 0 to 4.5.]
Discussion
Acknowledgments
The research reported here was supported by the
University of Bayreuth. I would like to thank Lucie
Necasova, Tereza Kvetnova, and Nha-Yong Au for
carrying out the experiment.