
Technological Forecasting & Social Change 154 (2020) 119982

Contents lists available at ScienceDirect

Technological Forecasting & Social Change


journal homepage: www.elsevier.com/locate/techfore

Who is an expert for foresight? A review of identification methods


Stefanie Mauksch (a), Heiko A. von der Gracht (b,⁎), Theodore J. Gordon (c)

(a) University of Leipzig, Institute of Anthropology, Germany
(b) Steinbeis University, School of International Business and Entrepreneurship, Germany
(c) Co-founder and Board Member, the Millennium Project

ARTICLE INFO

Keywords:
Expert identification
Expertise
Foresight
Judgment
Forecasting
Literature review
Delphi method
Scenarios
Future
Knowledge

ABSTRACT

While decision research tends to treat the question of expertise with suspicion, the capacity of experts within the wider, applied field of foresight often remains unquestioned. In this review of expert identification methods, we share the positive assessment of expert judgment for exploring plausible conditions of the future. However, given the contested status of expertise, it is crucial to know how a particular mode of recruitment, or (more often) a combination of diverse methods, seeks to secure the expert status of a person. Our review is motivated by the conviction that foresight researchers should not only assume value in expertise, but defend this value from scientific viewpoints. Based on an introduction to sociological, behavioral and cognitive perspectives on expertise, we trace the epistemological premises guiding different modes of selection. We list eight expert identification methods and explore their core assumptions, strengths, weaknesses and domain examples. Developing such linkages between priorities in expert selection and the arguments underlying them is the major contribution of this article. A second contribution lies in providing an overview of expert selection methods, ranging from simple and low-cost to more complex and combined methodologies.

1. Introduction

In recent decades, it has become routine for foresight researchers to report on the methods used to identify experts. Usually hidden from view, however, are the assumptions guiding the rationales of selection. While researchers have investigated themes such as consensus/disagreement or homogeneity/heterogeneity between experts (e.g. Karlsen, 2014; Meijering et al., 2013), changes in opinions during foresight processes (e.g. Makkonen et al., 2016), or differences between experts’/novices’ cognitive work in foresight exercises (e.g. Honda et al., 2017), the motivations behind choices in the individual selection of experts remain underexplored. We learn how foresight researchers selected experts, but not why they selected them. Confirming this caveat, Devaney and Henchion (2018) argue that

“a distinct gap exists in terms of appropriate and broadly applicable expert sampling and selection strategies […] the literature tends to focus on issues of consensus and opinion change amongst experts rather than debating who the experts should be, or who should participate, in the first instance” (p. 46).

Obviously, recruiting experts is only one task of the multi-step procedures typically involved in foresight exercises, and the procedures themselves vary considerably (Hasson and Keeney, 2011). However, given the contested status of expertise in decision research, it is crucial to know how a particular mode of identification, or (more often) a combination of diverse methods, seeks to secure the expert status of a person. Systematic processes help to ensure the credibility and trustworthiness of the results (Keeney et al., 2006) and correspond to a research-ethical commitment to selection equity, transparency and fairness (Devaney and Henchion, 2018).

Our review is grounded in two observations. First, while a vast array of judgment-based studies treat the question of expertise with care and suspicion (Klein et al., 2017), the status of experts within the wider and more “applied” field of foresight often relies on a notion of self-evidence. As several authors note, there is a paucity of literature developing scientific motivations for the usage of experts and expertise in foresight (Baker et al., 2006; Devaney and Henchion, 2018). Second, while there have been attempts in the past to present possible identification methods in a consistent manner (Gordon, 2009; Shanteau et al., 2002), these reviews are valuable but also dated and incomplete; they do not (yet) embrace recently developed methods such as expert finder systems (see Section 4.3.). While we share the positive assessment of expert judgment for exploring plausible conditions of the future (Gabriel, 2013), we think it is crucial to not only assume value in expertise, but to defend this value from scientific viewpoints.


⁎ Corresponding author.
E-mail addresses: stefanie.mauksch@uni-leipzig.de (S. Mauksch), vondergracht@steinbeis-sibe.de (H.A. von der Gracht), tedjgordon@gmail.com (T.J. Gordon).

https://doi.org/10.1016/j.techfore.2020.119982
Received 10 February 2019; Received in revised form 21 February 2020; Accepted 26 February 2020
0040-1625/ © 2020 Elsevier Inc. All rights reserved.

We start this review with a selective introduction to research on expertise, dividing this literature into predominant strands. Such an overview establishes the foundation for understanding the epistemological premises that guide particular modes of selection within futures studies. For instance, methods matching experts with task requirements inherently emphasize an insight gained from cognitive psychology: expertise is domain-and-task-specific (Thomas and Lawrence, 2018). This brief example indicates that each method of selection inevitably adopts a certain disciplinary stance, or multiple stances towards expertise, that often remain(s) concealed. Developing such linkages between priorities in expert selection and the discipline-specific arguments underlying them is the first contribution of this article. A second contribution lies in providing an overview of expert selection methods, ranging from simple and low-cost to more complex and combined methodologies.

2. Three disciplinary perspectives on expertise

Scholars widely agree with the initial formulation of an expert as someone who is skilful and well-informed in some special field (Ericsson, 2006b). Despite this general consensus, the debate around expertise is highly fragmented, which is due to various complexities inherent to the phenomenon (see Klein et al., 2017, for an overview). A plethora of reviews on experts and expertise suggest that the question of “what is an expert?” has multiple answers (Baker et al., 2006; Ericsson, 2006b; Keeney et al., 2001; Shanteau, 2015). Bedard (1989) once defined expertise as an “elusive concept” that is not well understood and, consequently, difficult to operationalize. Assessments range from a rather naïve appreciation of experts in managerial and political strategy-making, to a full denial of expertise as a meaningful element in producing insights into uncertain future states (Armstrong, 1980). Even if we hesitate going as far as denying the value of expertise in foresight altogether, we have to admit considerable disagreement about how experts contribute to our knowledge of the future. In order to forge a path into this jungle of perspectives, we provide a very brief, and ultimately simplistic, overview, differentiating between (1) the sociological view, (2) the behavioral view and (3) the cognitive view.

Before mapping these three epistemologies, it is crucial to note a few (soft rather than sharp) distinctions between forecasting and foresight. While these two fields overlap considerably, they differ in terms of the general directions they take, the aims they follow, the methods they apply and thus also in terms of preferred modes of selecting experts. Forecasting, in most scholars’ view, refers to estimating the “unknown,” i.e. short-, medium- or long-term futures in specific research areas (Cuhls, 2003). Research questions are set in advance and the overall aim is to create accurate probability statements about possible futures (Martin, 2010). Foresight, on the other hand, is fundamentally oriented towards drawing conclusions for the present and taking an active role in shaping the future (Cuhls, 2003). Foresight diverts from probabilistic forecasting by promoting an idea of the future as something multiply envisioned, shaped and influenced by strategic planning and intervention. In short, while forecasting is the task of making confident probabilistic statements, foresight involves the recognition that today's choices impact or even create the future (Martin, 2010). Foresight researchers therefore tend to adopt what we depict as the sociological view in order to create an inclusive and wide-ranging process of “constructing” the future (Martin, 2010). More classical statistics-based forecasting exercises often embrace a behavioral view in order to study and improve the making of judgments under uncertainty. Both fields share a belief in the potential improvement of experts’ forecasting/foresight skills, thus attending to what we call a cognitive view on the psychological operations that underlie experts’ future-oriented sensemaking. In the following sections, we introduce these three different viewpoints on the appropriate selection of experts and outline arguments for and against their application to foresight studies.

2.1. The sociological view

Sociology tends to categorize expertise not as something people “have”, but as a relational construct emerging from specific contexts of interpersonal encounter, such as a social network linking agents and institutional arrangements (Eyal, 2013). This view highlights the processes by which a social collective decides upon who is allowed to play the role of “expert”. Accordingly, we should

“[…] think of experts less in terms of their possessing some particular rare cognitive competency, or a greater quantity of ‘true’ knowledge than their colleagues, than as having been selected by a constituency willing to attribute expertise to them” (Agnew et al., 1994).

A few guiding principles employed for selecting experts for foresight (such as social acclamation, political influence and personal involvement, see Sections 4.1.2–4.1.4.) rely on sociological arguments – the socio-political status of a person – rather than granting value to expertise as a cognitive or performance-related property. Some foresight researchers, for instance, stress an individual's ability to influence policies as an important criterion for defining an expert (Baker et al., 2006). Researchers may select representatives of particular subgroups to enable mutual social learning that allows for the integration of social, scientific, cultural, environmental, economic, political and technological viewpoints (Devaney and Henchion, 2018). The relational view inherent to sociological angles adheres to a notion of foresight as a holistic, democratic and inclusive effort that integrates experts as individuals who speak on behalf of, and ideally in the interest of, particular social groups.

To note, selecting people who maintain a certain social status, reputation and representative role as experts involves the risk of opting for individuals who are not necessarily experts but convince through “outward signs of extreme self-confidence” (Shanteau, 1988, p. 211). Critical views highlight imbalances of power and class, by which some people attain expert status not (only) through excellence or democratic election, but because they have been born and raised into a privileged educational context (Ericsson, 2006b). However, foresight researchers often embrace sociological views for their potential to move beyond particular industry perspectives, exploring holistic change towards the better, such as in the fields of public health (Adler and Ziglio, 1996), water use and management (de Loë, 1995; Needham and de Loë, 1990; Sutterlüty et al., 2017), or the bioeconomy (Devaney and Henchion, 2018).

2.2. The behavioral view

Decision researchers outline the problem of expertise mainly by asking how human beings make choices in situations of uncertainty. Based on Einhorn's (1974) paradigm, they are less concerned with the ways expertise is learned, or the social roles experts take, but with how to reach a good decision. Behavioral studies often engage in comparative analysis, for instance by asking to what degree experts’ past performance serves as a guide to predict the accuracy of future assessments (see Section 4.1.7.) or how self-assessments relate to the actual performance of experts (see Section 4.1.6.). Despite a few positive results, decision research is generally suspicious of the value of expertise and frequently points out humans’ limited ability to make judgments under uncertainty (Einhorn and Hogarth, 1978; Green and Armstrong, 2007a; Tversky and Kahneman, 1974). Behavioral researchers realized that experts tend to be overconfident and pay insufficient regard to distributional information (Almandoz and Tilcsik, 2016; Harvey, 1997; Tichy, 2004). Overconfidence can result from a perception that the occurrence of an event could be controlled (Wright and Ayton, 1992) or from an overvaluation of past forecasting success (Hilary and Menzly, 2006). Overconfidence can also lead experts to ignore or discount valuable information that is in conflict with their own initial opinion (Green and Armstrong, 2007a; Yaniv and Kleinberger, 2000).


Other scholars examined the level of consensus among subjects (also called between-expert reliability), the stability of judgment over time and the degree to which individuals reflect on their own judgment processes (Bedard, 1989; Einhorn, 1974). For all these variables, decision researchers have observed a detrimental effect of expertise on performance. In consequence of these observations, decision researchers increasingly turn to statistical models for predictions and deem them to be superior in various domains (Armstrong, 1985; Atanasov et al., 2016; Tetlock, 2017 [2005]). Scholars found that the accuracy of averaging often exceeds that of the individual perceived as the most knowledgeable in the group (Tetlock, 2017 [2005]). From a probabilistic perspective, averaging methods tend to outperform experts because they counterbalance a long list of social-influence processes and judgment biases that undermine accuracy (Kahneman and Lovallo, 1993). Prediction markets, for instance, have been found to be relatively accurate in forecasting short-term events (e.g. presidential elections or passage of CO2 cap; Borison and Hamm, 2010; Sjöberg, 2009) and groups of experts outperform the single expert in longer term forecasts (Hanea et al., 2018; Parente and Anderson-Parente, 2011).
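To make the mechanics of the averaging argument concrete, the following minimal sketch (Python, with hypothetical forecasts and outcomes) scores three individual experts and their unweighted average against realized events using the Brier score; in this toy panel, as in the studies cited above, the plain average beats every single member.

```python
# A minimal sketch (hypothetical data) of the averaging argument: each
# expert states a probability for a set of yes/no events, and individuals
# are scored against the unweighted group average with the Brier score
# (mean squared error between forecast and outcome; lower is better).

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

outcomes = [1, 0, 1, 1, 0]  # realized events (1 = occurred)
panel = {                   # per-expert probability forecasts
    "expert_a": [0.9, 0.4, 0.5, 0.7, 0.3],
    "expert_b": [0.6, 0.1, 0.9, 0.5, 0.5],
    "expert_c": [0.8, 0.6, 0.5, 0.9, 0.2],
}

# unweighted mean forecast per event
average = [sum(fs) / len(panel) for fs in zip(*panel.values())]

for name, fs in panel.items():
    print(name, round(brier(fs, outcomes), 3))       # 0.12, 0.136, 0.14
print("group average", round(brier(average, outcomes), 3))  # 0.105, best of all
```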
In the field of foresight, scholars equally acknowledge biases in expert judgment. Hussler et al. (2011) and Ecken et al. (2011) have shown that experts can be subject to self-serving and optimistic biases when judging longer term developments. Dorr (2017) elaborates on common errors in reasoning about the future and illustrates how intuitive judgments about evolving futures may often fail due to typical fallacies in reasoning. Nevertheless, foresight studies generally assume positive value in expertise and try to overcome these different sets of biases through standardized procedures that potentially improve the quality of research. For instance, Winkler and Moser (2016) made suggestions as to how to overcome biases of framing, anchoring, desirability bias, bandwagon effect/groupthink as well as belief perseverance, and Belton et al. (2019) work on counterbalancing what they call group bias, researcher bias and opinion bias. Moreover, Bonaccorsi et al. (2020) recently recommended three mitigation strategies to become a standard repertoire of foresight researchers: diversity of panels (even inclusion of non-experts), negation by taking opposing views systematically into account, and abstraction by reasoning in frameworks and graphical functions.

The next section shifts over from the behavioral to the cognitive view, thus also moving from literature studying the “input and output” of the judgment process to research more interested in the “inside of the ‘black box’, the understanding of the expert's memory and decision process” (Bedard, 1989, p. 127).

2.3. The cognitive view

Pioneers in cognitive psychology like Sir Francis Galton initially assumed expertise as a set of innate mental capacities or “hereditary genius” (Ericsson, 2006b) – a perspective later questioned by an evolving nature/nurture debate in the study of expertise (Hambrick et al., 2017). The tacit assumption that experts somehow have “greater minds” and represent “truly exceptional people” lies behind what Chi (2006) calls the absolute approach to expertise. This assumption of a genetically determined superiority of particular people guides some of our everyday understanding of experts. In the context of cognitive science, however, it is much more common to take a relative or comparative approach that views expertise as a level of proficiency that a novice can achieve (Chi, 2006; Ericsson and Lehmann, 1996). Researchers of cognition thus generally perceive expertise as something that can be learned and taught (Ericsson et al., 2007). Diverting from the absolute view, the majority of scholars hold that the same learning mechanisms that account for the acquisition of everyday skills can be extended to the acquisition of high-level skills and expertise (Ericsson, 2006b). The difference between experts and novices is thus not dichotomous, but developmental or evolving. Educational psychologists of the 1970s abandoned the study of limitations of human decision-making in laboratory contexts (often involving college students), shifting their attention to the complexity of the experts’ actual contexts of work (Hoffman and Lintern, 2006). Different from both sociological and behavioral views, cognitive studies developed interest in processes inherent to the attainment of expertise and experts’ heuristic strategies of coping with complex issues.

One persistent theme that emerges from this literature is that individual expertise requires extensive and intensive domain-specific training and practice (Bereiter and Scardamalia, 1993; Ericsson, 2006b). In a cross-domain overview, Ericsson and Lehmann (1996) speak of ten years of deliberate daily practice. Experts “think” differently, because they rely on deeper knowledge in a domain, they organize their knowledge more by category, have a finer categorization and possess recall superiority (Bedard and Chi, 1992; Bereiter and Scardamalia, 1993; Kuchinke, 1997). Experts also engage in “case-based reasoning” (Kolodner, 1983), i.e. they use their memory of past events to organize vast amounts of information. Expert knowledge is thus structured in a way that renders it more accessible, functional and efficient. Experts draw from an extensively rich network of connections between concepts, whereas novices tend to base their organization on surface features of the information presented (Kuchinke, 1997). As compared to novices, experts also possess better problem identification skills, modifying, for instance, an ill-defined problem to a well-defined one (novices here tend to start immediately to solve the problem). Experts and novices also share a few cognitive strategies, such as means-ends analysis, setting sub-goals, generate-and-test and analogical reasoning (Bedard and Chi, 1992). In short, rather than using radically different strategies, experts outperform novices because they apply these strategies to a well-organized knowledge base (Hoffman, 1998; Kuchinke, 1997).

As indicated, one of the core assumptions of cognitive approaches is that expertise – different from competence – is domain-specific and not transferable (Herling and Provo, 2000; Shanteau, 2015). There are quite a few domains where “experts” do not perform better than less trained individuals. They improve when they start, up until an acceptable level, and then they no longer advance any further (the number of years is thus a poor predictive variable in some fields, as we will see) (Ericsson, 2006b). In their extensive comparisons, Shanteau and colleagues found that less structured domains (stockbrokers are a notorious example here) lead to less consensus (Shanteau, 2015; Shanteau et al., 2002). The heavier the reliance on human behavior (without technical “aids”), the more disagreement one will find (Shanteau et al., 2002). However, divergent from decision research, Shanteau (1992, 2015) argues that whether consensus is desirable at all heavily depends on the context of research. Decision researchers develop their conclusion that experts are flawed and prone to make simple errors from the assumption that consensus and inter-judgment reliability are desirable courses (Shanteau, 2015). However, if foresight projects embrace a multiplicity of options and perspectives rather than a single (often numerical) answer, that which is often interpreted as “bias” may actually point to a strength rather than a weakness of experts. There may even be situations in which “biased” participants are needed, as when a study involves policy options and the debate leading to the study has entrenched and opposing factions. In critique of the behavioral approach outlined above, cognitive psychologists ascribe the lack of consensus between experts to the divergent experiences they have gained in their respective fields (Bedard, 1989). Thus, Shanteau and others demand a shift from the single-dimensional view on expertise toward a (here: foresight-) task-sensitive perspective on expert performance (Dufva and Ahlqvist, 2015; Shanteau, 2015). Scholars should not generally view experts as “all-knowing single answer decision makers” (Shanteau, 2015, p. 172), but as knowledgeable consultants who disagree because they see alternative paths. The following section therefore seeks to characterize some of the particularities of the “foresight expert.”

3. Expertise for foresight

As initially stated, studies in the field of forecasting tend to emphasize single correct answers and numerical outcomes. Quantitative methods, such as time series analysis and trend extrapolation, submit to the view that the future is at least partly determined by the past (Poli, 2011). The second family of (semi-)qualitative foresight studies (Delphi studies, scenarios, etc.), however, diverts from this view by following the assumption that “the future can better be confronted by opening our minds and learning to consider different viewpoints” (Poli, 2011, p. 403). As Klein et al. (2017) summarize in their recently-published Why Expertise Matters:

“Our society depends on experts for mission-critical, complex technical guidance for high-stakes decision making because they can make decisions despite incomplete, incorrect, and contradictory information when established routines no longer apply. Experts are the people the team turns to when faced with difficult tasks” (Klein et al., 2017, p. 67).

Expertise in foresight means to envision and to judge subjectively, often across disciplinary boundaries, when precise techniques to deal with complex problems are lacking (Linstone and Turoff, 2002). Foresight practice thus heavily depends on in-depth thinking, evaluation and vision of individuals indicating future directions (Devaney and Henchion, 2018). The value of this kind of future-oriented thinking depends on “the mental processes – both rational and intuitive – used to develop images of the future as a form of cognitive intelligence” (Hines et al., 2017, p. 4). Futurists, according to Hines et al. (2017), should possess core competencies of framing, scanning, futuring, designing, visioning and adapting, along with a range of second order competencies specific to the task environments, professional roles and personal attributes. In foresight, even more than in other types of task environments, the challenge for an expert is to draw sense from multiple, constantly changing, and dynamic factors and to identify various alternate paths rather than a single solution (Shanteau, 2015). Instead of using the status quo as the basis for prognosis (as in forecasting), expertise in foresight also often means challenging taken-for-granted wisdom. As Dufva and Ahlqvist (2015) note in their examination of types of knowledge relevant for foresight, “[a] common motivation […] is to broaden the horizon on what is deemed to be relevant or possible in the present by challenging widely shared positions and existing worldviews” (p. 252).

There are other issues with decision research's partial denial of the value of expertise. At a more philosophical level, the future is not simply an exterior reality “happening” to us as passive recipients, but at least partially a result of purposeful action. Diverse foresight methods are thus directed not towards achieving certainty, but towards determining consensus on the best policy under given circumstances (Baker et al., 2006) – feasible in areas where there is little knowledge or certainty surrounding the issue at stake. Foresight often produces visions and strategies rather than accurate forecasts (Cuhls, 2003). In short, we depend on experts because of their competences in structuring complex information and drawing conclusions from these, but also have to consider the limitations of human judgment processes. It thus seems wise to adopt Shanteau's (1992) balanced view that “experts are neither as deficient as suggested in the decision making literature nor as unique as implied by the cognitive science perspective” (p. 256).

In concluding this section, it is important to note that foresight may not simply rely on expertise but create expertise during the process (Shanteau et al., 2002). Practice for gaining experience might be triggered by observations (e.g. training videos), reflections (e.g. case studies) and discussion sessions (e.g. assessment center). It has been proven that if experts are sensitized to their biases and trained to reduce them through systematic advice-taking, they can considerably improve competences such as short-term memory (Chase and Ericsson, 1981) and their overall performance (Burgman et al., 2011; Cooke, 1991). Thus, if the study design allows, the training of foresight-related skills in participants potentially increases the quality of results (Green and Armstrong, 2007a). The idea to create expertise inherently attends to the importance of task characteristics and task familiarity highlighted by cognitive psychology. Bolger and Wright (2017), for instance, suggested that training helps assure that differently specialized experts employ the same terminology and similar concepts. Similarly, the Foresight Competency Model by Hines et al. (2017) views individual (again, industry- and task-specific) foresight as an “innate cognitive ability that can be developed,” so that one may reasonably think of “the futurist” as a profession whose tools can be learned and improved (Hines et al., 2017, p. 4). However, it is important to keep in mind that competence trainings support the development of process expertise rather than domain expertise. In other words, while expertise in foresight performance can be improved, there are no short cuts to acquiring deep knowledge and experience in most domains (Ericsson et al., 2007).

4. Identification methods

We now examine a range of expert selection methods that have been used in future-oriented studies (see Table 1). In order to provide practical orientation, we will list a short description and a summary of its strengths and weaknesses for each single approach, indicating some of the (im)practicalities of working with a particular method (see Table 1). In order to provide orientation on the selection of methods in certain contexts, we add an additional column to Table 1 that lists examples of task domains (application) including examples of foresight studies using the respective approach. The final column presents the disciplinary orientation(s) we presume to be at the core of each approach, based on the three angles introduced so far (sociological vs. behavioral vs. cognitive). Such allocation of selection methods to a research tradition raises readers’ awareness of discipline-specific arguments behind individual tools. Subsequent to our listing of “the informal approach” (Section 4.1.1.) and eight “formal” expert identification methods (Sections 4.1.2. to 4.1.9.), we will explore two additional themes, namely combined approaches (that integrate multiple methods, Section 4.2.) and approaches that locate their experts within pools, platforms or other databases (Section 4.3.). Far from being popular techniques in the field of foresight at the moment, these additional themes indicate future directions and embrace more complex measures of expertise. While we occasionally point towards rationales of panel selection, a detailed presentation of approaches suitable for composing panels would venture beyond the scope of this paper. We are thus primarily interested in how to identify individual experts.

4.1. Individual selection methods

4.1.1. Informal approaches

At one end of the extremes of expert selection lies what we call “informal approaches,” since they opt against formalizing expertise or avoid reporting explicit selection criteria. Here, we refer to selection processes that follow intuitive considerations rather than analytical principles, such as going through researchers’ chains of contacts (Renzi and Freitas, 2015) or using stakeholders already involved in the respective project (Miles et al., 2016). The attribution of expertness to some individuals here follows “common sense and practical logistics” (Keeney et al., 2006, p. 208). While most of these examples do not provide any argumentation for why they proceeded in the way they did (which is why we call them informal), informal approaches to expert selection inherently relate to the requirement of purposive (rather than representative) sampling. Belton et al. (2019), for instance, argue that there is no “magic formula” for expert selection since foresight studies tend to rely on convenience samples rather than to statistically reflect the opinions of certain sets of people (see also de Loë, Melnychuk, Murray, and Plummer, 2016; Devaney and Henchion, 2018).

Table 1
Overview of expert identification methods.

Social Acclamation
Definition/explanation: Peer nomination.
Pro:
• Field experts are good assessors of other domain-specific experts
• “Democratic” and holistic approach
• Co-nominated experts outperformed laymen in some studies
• Peer assessments correlate with external cues
• Inclusion of unknown experts
• Drop-outs are less likely in nomination procedures
Contra:
• Social desirability bias or popularity effect, i.e. acclaimed expertise correlates with popularity of a person
Examples of task domains (application): Advisory committees (Cooke, 1991); Technology foresight (Nedeva et al., 1996); Logistics (von der Gracht, 2008); Priority-setting for future research policies (Cuhls and Georghiou, 2004); Sustainability initiatives (Sutterlüty et al., 2017)
Disciplinary approach: Sociological

Political Influence
Definition/explanation: Selecting experts with potential political impact.
Pro:
• Easily accessible information
• Foresight projects may achieve positive social/environmental change
Contra:
• Politically powerful individuals are not always experts
Examples of task domains (application): Government-led foresight (Calof and Smith, 2010); Adaptive policy-making (van der Pas, Kwakkel, and Van Wee, 2012); Policymaking to address public issues and citizen participation (Wagner et al., 2016); Future-oriented technology analysis (Andersen et al., 2017)
Disciplinary approach: Sociological

Personal Involvement
Definition/explanation: Selection on the basis of personal interest in the subject of expertise.
Pro:
• Deliberate practice as an important element
• Higher response rate
• More inclusive expert selection
Contra:
• Requires operationalization/measurement
• Self-selection bias
• Materialistic personal interests may be involved
Examples of task domains (application): Inclusive bioeconomy development (Devaney and Henchion, 2018); Urban mobility (Spickermann et al., 2014)
Disciplinary approach: Sociological / Cognitive

External Cues
Definition/explanation: Assessment based on externally available criteria, e.g. years on the job, job position, certification, publications etc.
Pro:
• Easily accessible information
• External reflection of skills
• Experience often correlates with improved cognitive skills and deeper knowledge
Contra:
• Risk of hidden biases in the research team's decisions of who counts as an expert
• More experience does not always mean more expertise
• Professionals always move up, but seldom down; some professionals never become experts
• Not applicable to domains lacking institutionalized criteria of expertise
• Good theorists are not necessarily good practitioners (domain expertise vs. process expertise)
• Neglects creativity and imagination
Examples of task domains (application): Auditing (Abdolmohammadi and Shanteau, 1992; Tiberius and Hirth, 2019); Financial forecasts (Hommel et al., 2019; Jacob et al., 1999; Mikhail et al., 1997); Clinical diagnosis and health (Baker et al., 2006; Graham et al., 2003); Sustainable tourism (Miller, 2001); Nuclear energy (Hussler et al., 2011); Supply chain management (Kembro et al., 2017)
Disciplinary approach: Sociological / Cognitive

Self-Ratings
Definition/explanation: Self-assessment of expertise.
Pro:
• Low-barrier method for fields that lack objective criteria of expertise
• Experts are theoretically the best judges of their performance; experts excel in strong self-monitoring skills
• Self-rated experts outperform self-rated non-experts
Contra:
• Ambivalent results from decision research
• Biases: overoptimism, overconfidence
Examples of task domains (application): Self-assessment of physicians (Davis et al., 2006); Technological roadmapping/foresight (Chakravarti et al., 1998; Förster, 2015; Shin, 1998)
Disciplinary approach: Behavioral / Cognitive

Past Performance
Definition/explanation: Selection based on past performance.
Pro:
• Low-barrier, low-cost method if past performance data is available
• Well-established “gold standards” and rating systems exist in some domains (e.g. in several scientific disciplines)
Contra:
• Depends on measurable outcomes
• Negative evidence for the forecasting of extreme or rare events
• Biases: overconfidence after success
• Fails to acknowledge non-quantifiable, non-observable aspects about experts
Examples of task domains (application): Financial analysis (Jacob et al., 1999; Sinha et al., 1997); Tourism forecasts (Croce et al., 2016)
Disciplinary approach: Behavioral

Knowledge Tests
Definition/explanation: Selection based on verifiable knowledge.
Pro:
• Answers are verifiable
• Allows for sub-selection within a group
Contra:
• Important, but not sufficient: knowledge elicitation is more important than knowledge alone
• Ethical issues/waste of resources: participants are tested and then excluded
• Not applicable if experts are confronted with new tasks
• Neglects experience-related heuristics (e.g. tacit knowledge)
• Neglects creativity and imagination
Examples of task domains (application): Situational judgment/tacit knowledge tests for military leaders (Hedlund et al., 2003); Spatial expertise of gamers (Sims and Mayer, 2002); Predicting job performance (Dakin and Armstrong, 1989); Auditing (Abdolmohammadi and Shanteau, 1992); R&D and anticipatory intelligence (Matheny, 2016); Life sciences (Burgman et al., 2011)
Disciplinary approach: Cognitive

Psychological Traits
Definition/explanation: Cognitive tests assessing expertise.
Pro:
• Identification of “true” domain experts who meet important cognitive criteria (inner consistency, discrimination ability, the drawing of analogies etc.)
• Applicable to repetitive tasks with evaluable outcomes
Contra:
• High demand for preparation
• Involves sophisticated and time-intensive procedures
• Some fields lack knowledge about cognitive processes in experts
• Neglects creativity and imagination
Examples of task domains (application): Auditing (Marchant, 1990)
Disciplinary approach: Cognitive / Sociological

While this is true, there is still a danger of disregarding the question of expertise altogether. When selecting people based on a vague impression of their knowledgeability, researchers undermine the recognition that knowledge is only one of several dimensions of expertise (Ericsson et al., 2007). Bolger and Wright (2017) contend that social expertise – the consensus within a particular group (such as the team of researchers) that someone has expertise – is a poor proxy for true expertise in the cognitive sense. An informal approach is also questionable from the perspective of authors who argue that experts personally known to the researcher are exactly those who should not be invited, as such a procedure may create various kinds of biases (Rowe et al., 1991; Van Zolingen and Klaassen, 2003).

4.1.2. Social acclamation

The first approach that falls under the rubric of sociological views is social acclamation. Social acclamation refers to attempts of identifying experts through nomination of peers, thereby building on the assumption that agreed-upon experts demonstrate improved performance over laymen (Phelps and Shanteau, 1978). Social acclamation comes in different varieties also known as co-nomination, snowballing, the reputational approach (Miles et al., 2016), or consensual acclamation (Shanteau, 1993). The search process usually begins with the research team identifying prospected participants, who then nominate further key people. The initial aim is to generate a long list of candidates to be cut down to a shorter list of primary nominees and alternates (Loveridge, 2004; Miles et al., 2016).
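As a minimal illustration of the tallying step, the sketch below (Python, with hypothetical names) counts one round of co-nominations and cuts the long list down to a shortlist of the most frequently nominated candidates; in practice, the newly named individuals would themselves be asked to nominate, and rounds would continue until the list stabilizes.

```python
# Minimal sketch (hypothetical names) of a co-nomination tally: seed
# experts nominate further key people, nominations are counted, and the
# long list is cut down to a shortlist of primary nominees.

from collections import Counter

nominations = {
    "seed_1": ["dr_alvarez", "prof_chen", "ms_okafor"],
    "seed_2": ["prof_chen", "dr_alvarez", "dr_banks"],
    "seed_3": ["prof_chen", "ms_okafor", "dr_yilmaz"],
}

tally = Counter(name for noms in nominations.values() for name in noms)

SHORTLIST_SIZE = 3
shortlist = [name for name, _ in tally.most_common(SHORTLIST_SIZE)]
print(shortlist)  # ['prof_chen', 'dr_alvarez', 'ms_okafor']
```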


Such an approach may broaden the knowledge base and integrate previously unknown individuals and fresh knowledge into the foresight process, thus representing a holistic and inclusive approach. Renowned scholars argued in favor of social acclamation approaches because, when given a specific context, experts excel in judging each other's levels of expertise (Kahneman and Klein, 2009; McDonald, 2001). Nominated candidates are also less likely to drop out of multiple-round procedures such as Delphi, especially if the study officially lists them as participants (again a “sociological” effect; McDonald, 2001). Another positive aspect of social acclamation is that peer assessments often correlate with external cues such as publications and qualifications (Burgman et al., 2011). However, these correlations cannot be interpreted as predictors of success in the actual performance of experts (Burgman et al., 2011), because social desirability and the perceived popularity of a person have effects on both (Shanteau et al., 2002).

4.1.3. Political influence

A few foresight studies in the field of policy-making suggest selecting experts according to their potential political impact, thus subscribing to the sociological perception of expertise as a social status rather than an inherent characteristic of persons. While such impact may, or may not, speak for their cognition- or knowledge-related expertise, individuals’ political positioning has a role in translating outcomes into governance and action. If one considers the future as being shaped by individual and collective goal-oriented activities, “foresight impact” (Calof and Smith, 2010) represents an important criterion to consider when assembling panel members or selecting individual experts. As Calof and Smith (2010) remind us, foresight is also a socio-political activity that ideally achieves learning effects on behalf of the people involved, and supports the strategy formulation of system actors. Hinting at this notion of real world influence, Turoff (1970) in his seminal article on the policy Delphi already suggested selecting individuals at a “fairly high level of responsibility” (p. 156). Interestingly, consensus between experts is not being used here as a reliability measure, but refers to an important outcome of foresight projects. Devaney and Henchion (2018), for instance, argue for Delphi studies in the context of bioeconomic transitions that experts involved should “reach consensus as to the most sustainable and just allocation of biological resources” (p. 46). In this view, foresight is a form of negotiation around far-minded plans for the future, by which conflicts or different visions of a new product, service, or strategy may be resolved.

While this approach is clearly at odds with both cognitive and decision-oriented explorations of expertise, it accepts the conditions under which foresight activities usually take place (in the context of policy or strategy projects) and poses as a challenge how they may achieve positive societal impact. On the negative side, the selection of politically powerful individuals may cause social desirability biases and/or involve the risk of engaging people who exercise power but are not necessarily domain experts.

4.1.4. Personal involvement

It seems self-evident that individuals should participate out of personal concern for the issues at stake. Experienced researchers often indicate the critical role of individuals’ personal affect and their social encouragement to participate on the quality of the foresight process (Hunt, 2006). We view personal involvement as both a sociological and a cognitive criterion, because 1) these persons may have a professional “mandate” to influence certain fields such as the bioeconomy or urban mobility, and 2) they may simultaneously be emotionally involved and deeply concerned about the issues at stake. As Kuchinke (1997) highlighted, part of being an expert is high commitment to the domain of expertise, which is in line with the general emphasis on deliberate practice in expert learning (Ericsson et al., 1993). The desire to receive the results of a study in one's professional field can sometimes be a motivation to perform well. Experts who are highly interested in topics at stake in Delphi surveys, for instance, demonstrate a high initial response rate and a low tendency to drop out (Hasson et al., 2000). Needham and de Loë (1990) and others who follow their approach (e.g. Devaney and Henchion, 2018) move beyond merely pointing to the issue of “closeness to the topic.” Instead, they conceptualize and operationalize this closeness in a continuum model ranging across three potential levels: subjective closeness (direct experience and experiential knowledge), mandated closeness (professional or legal responsibility for the issue at hand) and objective closeness (familiarity with the topic as a result of exploration, e.g. research) (Needham and de Loë, 1990).

On the negative side, as the study of Needham and de Loë (1990) already indicates, it is challenging to operationalize and measure the commitment of study participants. It is also absolutely possible that experts who participate out of personal interest encounter self-selection and desirability biases (Hasson et al., 2000; Rowe et al., 1991; Van Zolingen and Klaassen, 2003). For example, scientists who are keen to achieve intellectual breakthroughs and to develop sophisticated research in a particular field – sometimes in combination with commercial aspects – may overstate certain opportunities mapped in the foresight process (Devaney and Henchion, 2018). While researchers have to attend to these potential biases, selecting candidates based on their personal or professional interest may secure the commitment to, and thus also the quality of, the foresight process.

4.1.5. External cues

We use “external cues” as an overarching term for criteria that are externally available to a research team. Methods listed in this section may also be characterized as “professionalism approaches” that refer to sociological and cognitive rather than behavioral criteria (Bolger and Wright, 1994). Before engaging with this mode of selection more thoroughly, it is important to note that because the cues themselves are defined by the research team, they may involve hidden biases of the researchers themselves (Croce et al., 2016). The selection process cannot be thought of as bias-free, since bias is an intrinsic human trait that the mind uses to simplify decision making (Kahneman, 2011), including decisions relevant to who is perceived as an “expert.” Stereotyping, for instance, is one of these biases that refers to the unconscious attribution of particular qualities to members of a certain social group (Greenwald and Banaji, 1995). Nevertheless, an external cues-based approach is the most popular form of identifying experts for foresight (for examples, see Engelke et al., 2016; Toma and Picioreanu, 2016), which is why we opted for listing a few sub-approaches here: 1) experience/years on the job, 2) certifications and publications, and 3) work positions.

Experience is widely perceived as the second parameter that – together with knowledge – creates expertise. A “time element called experience,” Herling and Provo (2000) argue, is at the core of all models of expertise and a very common referent in foresight studies. An individual expert should have worked in her/his area for a considerable length of time (Ericsson, 2006b). The experience-based model of expertise finds support from a range of scholars, such as Day and Lord (1992) who observed – in line with cognitive psychology – that older organizational experts incorporate more problem information and structure this knowledge in a greater variance of categories, thus relying on well-developed, context-dependent heuristics. Confirming this notion, Mikhail et al. (1997) and Jacob et al. (1999) found evidence that analysts improve their forecast accuracy as they gain company-specific experience. Scholars generally agree that expertise positively correlates with years on the job (Richman et al., 1996), which is easily accessible factual information that is well documented in a variety of domains. The examination of experience may also be intensified by inviting potential experts to report further details, e.g. the approximate hours spent on specific tasks or within particular sub-domains (Hoffman and Lintern, 2006). Yet, it is important to keep in mind that in some domains experience fails to indicate expertise (Bedard, 1989), because people may exhibit a number of biases that prevent them from using information that they have gained from many years of working in a particular area.

Another parameter among the external cues-based approaches is to use forms of certification or accreditation to identify experts. Such an approach is inherently problematic insofar as certifiable standards rarely exist for expert domains; experts are needed precisely because there is no “ground truth” (Abdolmohammadi and Shanteau, 1992; Gigerenzer et al., 2007). Registered qualifications are also not necessarily consistent with expertise and ultimately exclude experts who demonstrate knowledge in ways other than a professional qualification (Baker et al., 2006). The same holds true for selection based on publications (e.g. Hallowell and Gambatese, 2010; Renzi and Freitas, 2015). Graham et al. (2003) and Miller (2001), for instance, defined a minimum number of recently published, peer-reviewed “quality papers” in order to identify experts for Delphi studies in the fields of medicine and tourism. Depending on the field, such identification may include citation indexes and patents. While the most obvious advantage here is that Google Scholar provides an easy-to-handle tool to identify scholars with considerable knowledge and impact in their fields, in some domains such as medicine, theorists are not necessarily good practitioners.
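Such screening criteria are straightforward to operationalize. The sketch below (Python, with hypothetical candidates and thresholds) combines two of the cues discussed in this section, years in the domain and recent peer-reviewed publications, into a simple filter; the publication minimum echoes Graham et al. (2003) and Miller (2001), while the threshold values themselves remain study-specific judgment calls by the research team.

```python
# Minimal sketch (hypothetical candidates) of an external cues-based screen:
# keep candidates who clear minimum thresholds for years in the domain and
# recent peer-reviewed publications. Threshold values are study-specific.

candidates = [
    {"name": "cand_1", "years_in_domain": 12, "recent_papers": 6},
    {"name": "cand_2", "years_in_domain": 3,  "recent_papers": 9},
    {"name": "cand_3", "years_in_domain": 15, "recent_papers": 1},
]

MIN_YEARS, MIN_PAPERS = 5, 4

longlist = [c["name"] for c in candidates
            if c["years_in_domain"] >= MIN_YEARS and c["recent_papers"] >= MIN_PAPERS]
print(longlist)  # ['cand_1']
```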
A similar ambivalence finds expression in the debate around work positions or occupations, for example supply chain professionals (Kembro et al., 2017) or health opinion makers (Baker et al., 2006), as indicators of expertise. This strategy can be misleading, since job improvements do not necessarily lead to expert levels of achievement (Ericsson, 2006a). Scholars also documented an absence of improvement by experienced individuals in many areas, for example in the diagnosis of heart sounds, in financial advice, or auditor evaluations (Ericsson, 2006a; Ericsson and Lehmann, 1996). Shanteau et al. (2002) even claimed that there are professionals who, for several reasons, never become “real” experts from a cognitive perspective, and remain “stuck” at lower levels of expertise attainment. Highly-ranked individuals may also fail to perform in judgment-related tasks if their “epistemic expertise” (capacities) profoundly differs from their “performative/process expertise” (the successful execution of capacities) (Karlsen, 2014; Miles et al., 2016; Shanteau et al., 2002). Moreover, selecting participants on the basis of their current jobs tends to favor senior experts and is only applicable to knowledge-related disciplines (Abdolmohammadi and Shanteau, 1992).

7
S. Mauksch, et al. Technological Forecasting & Social Change 154 (2020) 119982

In their basic assumptions, professionalism approaches exclude the idea that a person might be lacking in a key skill with growing age or extensive factual knowledge, and often ignore creativity, imagination and other competencies crucial to foresight. For instance, it has been suggested that with growing experience creativity decreases (Chi, 2006) – a characteristic highly relevant for foresight tasks, which often need to be approached in spontaneous and flexible ways. We conclude that external cues-based approaches are popular because of their easy implementation; however, they do not necessarily point to those who are the best choice for a particular foresight exercise.

4.1.6. Self-ratings

Self-ratings refer to the simple process of asking potential panellists or study subjects to rate their own expertise (Mullen, 2003). Since scholars often adopt self-ratings in comparative studies that relate prior self-assessment to actual performance, we categorize self-ratings as a behavioral approach. However, self-ratings have also been discussed by scholars of cognition who found that the capacity for self-assessment can be trained and improved.

Self-assessments may be used at the initial step of selecting experts (Gordon, 2009), or to rank their expertise for individual judgment tasks during the investigation process itself. For instance, researchers may ask panellists to indicate a confidence level in their individual estimates (Larreche and Moinpour, 1983) or their perception of a single question's “easiness” (Kawamoto et al., 2019). While often used in practice, the value of self-ratings is subject to heated debates, again between (more affirmative) cognitive scientists and (more skeptical) decision analysts. Proponents of self-ratings argue in favor of their easy application and straightforward analysis, since only one person has to assess or scale the expertise of oneself. As Ericsson (2006b) argued, experts are (theoretically) nearly always the best qualified to evaluate their own performance. Pioneering studies conducted by the RAND Corporation applied self-ratings to identify more accurate or “elite” subgroups in group-based judgment techniques (Dalkey et al., 1970; Kawamoto et al., 2017). In a similar vein, Best (1974) found that the median of judgments made by self-rated experts outperformed the median of non-self-rated experts on intellective judgment tasks (e.g. the number of students enrolled in the last term at the faculty's business school). Other cognitive researchers contend that experts usually excel in strong self-monitoring skills and self-knowledge: experts are more aware when they make errors, why they fail to comprehend, and when they need to check their solutions (Kuchinke, 1997). Furthermore, while scholars usually underline the value of external (validity) criteria to formalize or operationalize expertise (Bedard, 1989; Shanteau, 1993), many domains have no established objective criteria for superior performance and standards themselves might be shifting (Bedard, 1989). In this light, self-ratings may sometimes be the only available tool when alternative forms of assessment are inapplicable, for instance in radically new fields that lack institutionalized structures.

Critical contributions, published in the same decades as the RAND research referenced above, in contrast, questioned the usage of self-ratings for identifying experts (see Sackman, 1974 for a review). While many scholars underline the “theoretical value” of self-assessment, empirical studies frequently show that self-declared experts fail to perform (for a review, see Ward et al., 2002). The studies of Tichy (2004) and Green and Armstrong (2007a), for instance, indicate that experts with higher self-ratings succumb to stronger overoptimism and overconfidence biases in their long-term forecasts (see also Hilary and Menzly, 2006; Kahneman and Klein, 2009). Rowe et al. (1991) conclude that self-ratings may not help to identify true experts, but only those who “believe themselves to be experts” (p. 24) and advise against assessing experts’ know-how by means of self-ratings only. Other research produced a more ambivalent finding: high self-rated expertise is positively related to accurate first round predictions in Delphi-like procedures, but is less efficient as a predictor of ultimate accuracy (Rowe and Wright, 1996). In general, however, decision researchers reject self-ratings and recommend that differential weighting based on anything other than performance should be avoided (Hanea et al., 2018).

Yet one has to keep in mind that, similar to the critique of Shanteau (2015) presented earlier in this review, the negative assessment of self-ratings may also emerge in effect of particular study setups. Ward et al. (2002) examined the methodological issues that plague the measurement of self-assessment abilities. For example, studies often implicitly assume that every individual is equal in terms of the competence to evaluate themselves; in other cases, researchers failed to take into account that a few outliers spoil the self-rating ability of the whole panel. The authors therefore suggested a reconceptualization of self-assessment – an intraindividual approach that focuses on individuals’ strengths and weaknesses relative to each other rather than rating themselves relative to their peers (Ward et al., 2002). In sum, as Rowe and Wright (1996) already argued, self-assessments are a practical tool because they may be obtained prior to implementation of the survey procedure, rather than after it. However, if they are used in the common interindividual process (i.e. each individual evaluates his/her own performance, see Ward et al., 2002) and/or if they are the only measure used in a study, their predictive power is questionable and should be treated with caution.
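The RAND-style use of self-ratings described above is easy to illustrate. In the following toy sketch (Python, hypothetical data), panellists estimate a quantity and rate their own expertise on a 1-7 scale; in the spirit of Dalkey et al. (1970) and Best (1974), the analyst can then compare the median of the full panel with the median of a self-rated “elite” subgroup once the true value is known.

```python
# Minimal sketch (hypothetical data) of selecting an "elite" subgroup via
# self-ratings: panellists estimate a quantity and rate their own expertise
# on a 1-7 scale; medians of the full panel and the high self-rating
# subgroup can then be compared against the true value once it is known.

from statistics import median

responses = [
    # (self-rating 1-7, estimate of the target quantity)
    (7, 5200), (6, 4800), (5, 9000), (4, 2500), (3, 7000), (2, 12000),
]

panel_median = median(est for _, est in responses)
elite_median = median(est for rating, est in responses if rating >= 6)

print("full panel median:", panel_median)      # 6100.0
print("elite subgroup median:", elite_median)  # 5000.0
```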
4.1.7. Past performance

In general, performance-based approaches stress behavioral aspects, by including the past forecasting judgments of an expert as a retrospective criterion of assessment (Denrell and Fang, 2010). They strongly relate expertise to judgment behavior, i.e. they define a construct in terms of its outcomes and not so much as a characteristic that study participants bring along. While the above parameter of experience, i.e. knowledge gained from many years of working in a field, could be a qualifier for performance in judgments, it is not a mandatory condition. Past performance more narrowly concerns performative expertise, which means that individuals not only possess knowledge and capacities, but have already proven that they are capable of successfully executing these capacities in specific situations of judgment. Past performance approaches thus assume a potential for good decision-making based on previous demonstrations (Kuchinke, 1997). In some fields such as finance, for instance, there is evidence that subjects who are identified as superior in one period tend to have superior performance in subsequent periods (Sinha et al., 1997). To name an example, Cooke's (1991) method builds on the selection and weighting of experts based on earlier forecasts for questions similar to the target (known as “seed questions”). The repetitive structure of performance-based models is crucial. As researchers highlight, the maximal adaptation to task constraints is essential for the performance of experts (Devine and Kozlowski, 1995; Ericsson and Lehmann, 1996; Shanteau, 1992), which renders this method of selection particularly applicable to repetitive tasks.
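To illustrate the core idea of seed questions, the simplified sketch below (Python, hypothetical numbers) weights two experts by their error on questions with known answers and combines their judgments on a target question accordingly; note that Cooke's actual classical model derives weights from statistical calibration and information scores, not from the toy inverse-error rule used here.

```python
# Simplified sketch of seed-question weighting (hypothetical data). Experts
# first estimate quantities whose true values are known ("seeds"); their
# mean absolute percentage error on the seeds sets the weight of their
# judgment on the target question. Cooke's (1991) classical model uses
# calibration and information scores instead of this toy inverse-error rule.

seed_truth = [120.0, 45.0, 300.0]  # known answers to the seed questions

experts = {
    "expert_a": {"seeds": [110.0, 50.0, 280.0], "target": 75.0},
    "expert_b": {"seeds": [200.0, 20.0, 150.0], "target": 40.0},
}

def seed_error(estimates):
    # mean absolute percentage error over the seed questions (assumed > 0)
    return sum(abs(e - t) / t for e, t in zip(estimates, seed_truth)) / len(seed_truth)

weights = {name: 1.0 / seed_error(data["seeds"]) for name, data in experts.items()}
total = sum(weights.values())

combined = sum((w / total) * experts[name]["target"] for name, w in weights.items())
print(round(combined, 1))  # 70.4, closer to expert_a, who did better on the seeds
```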
Critical contributions, published in the same decades as the RAND However, selection based on past performance should attend to the
research referenced above, in contrast, questioned the usage of self- circumstances of previous judgments, such as whether an accurate
ratings for identifying experts (see Sackman, 1974 for a review). While forecast truly evolves from a person's expertise or in effect of luck or
many scholars underline the “theoretical value” of self-assessment, irrational choice. Auditors, for instance, have been shown to make
empirical studies frequently show that self-declared experts fail to objectively good decisions, but for the wrong reason (Bedard, 1989).
perform (for a review, see Ward et al., 2002). The studies of Similarly, Denrell and Fang (2010), found that accurate forecasts of
Tichy (2004) and Green and Armstrong (2007a), for instance, indicate extreme events or short periods of success could be a signal of poor
that experts with higher self-ratings succumb to stronger overoptimism rather than good judgment. There is also a considerable potential that
and overconfidence biases in their long-term forecasts (see also past success leads to overconfidence in experts (Hilary and
Hilary and Menzly, 2006; Kahneman and Klein, 2009). Menzly, 2006), so that the accuracy of their judgments may decrease
Rowe et al. (1991) conclude that self-ratings may not help to identify rather than increase in the long run. Most importantly, however, past
true experts, but only those who “believe themselves to be experts” (p. performance is meaningful only in the context of Poli's (2011) type 1 of
24) and advise against assessing experts’ know-how by means of self- forecasting tasks, which extrapolate from the past into the future and
ratings only. Other research produced a more ambivalent finding: high which are based on “judgment” rather than “vision.” In connection to
self-rated expertise is positively related to accurate first round predic- Shanteau's (2015) critique introduced earlier in this article, perfor-
tions in Delphi-like procedures, but is less efficient as a predictor of mance-based measures inherently presume repetitive tasks with eva-
ultimate accuracy (Rowe and Wright, 1996). In general, however, luable outcomes and need to be reviewed with regard to their

8
S. Mauksch, et al. Technological Forecasting & Social Change 154 (2020) 119982

However, selection based on past performance should attend to the circumstances of previous judgments, such as whether an accurate forecast truly evolved from a person's expertise or was in effect a product of luck or irrational choice. Auditors, for instance, have been shown to make objectively good decisions, but for the wrong reasons (Bedard, 1989). Similarly, Denrell and Fang (2010) found that accurate forecasts of extreme events or short periods of success could be a signal of poor rather than good judgment. There is also considerable potential that past success leads to overconfidence in experts (Hilary and Menzly, 2006), so that the accuracy of their judgments may decrease rather than increase in the long run. Most importantly, however, past performance is meaningful only in the context of Poli's (2011) type 1 forecasting tasks, which extrapolate from the past into the future and which are based on "judgment" rather than "vision." In connection with Shanteau's (2015) critique introduced earlier in this article, performance-based measures inherently presume repetitive tasks with evaluable outcomes and need to be reviewed with regard to their usefulness for a particular research design in the context of foresight. Correlating expertise with performance also leaves little space for imponderables, such as coincidence or intuition, and fails to acknowledge non-quantifiable, non-observable aspects about experts, such as tacit knowledge. Moreover, as Bolger and Wright (2017) and Bedard (1989) argued, most fields – with the usual exceptions of weather forecasting, stock market forecasting and weekly demand forecasting – lack objective data on experts' performances. Because these methods require short-term forecasts and actual realizations in order to measure and weight expert performance, they strongly link with statistical forecasting and are of limited use for most types of foresight studies.
4.1.8. Knowledge tests

Knowledge-based approaches derive their assessments from the degree to which subjects know "hard facts." In early laboratory experiments, RAND categorized the knowledgeability of individuals based on almanac questions derived from statistical and descriptive data that may cover the entire world, including recent historical events, weather forecasts, tide tables and many others (Dalkey and Brown, 1971). The advantage of this procedure is that the answers are already known. Alternatively, researchers used other types of questions whose answers can be evaluated within a constrained time frame (e.g. gas prices, election outcomes) (Dalkey and Brown, 1971; Parente and Anderson-Parente, 2011). A variant of such procedures is the "tree approach," which develops a sequential series of questions asked until the respondents self-disqualify (or qualify) for the judgment task (Gordon and Glenn, 1994).
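As an illustration of such a tree approach, a short screening routine might look as follows. This is our own hypothetical sketch; the questions, answers and qualification threshold are invented for demonstration.

```python
from dataclasses import dataclass

@dataclass
class SeedItem:
    question: str
    answer: str

# Hypothetical screening items of increasing specificity.
TREE = [
    SeedItem("In which decade did RAND develop the Delphi method?", "1950s"),
    SeedItem("Which statistic is typically fed back between Delphi rounds?", "median"),
    SeedItem("What is a 'seed question' used for?", "calibration"),
]

def qualifies(responses: list[str], required: int = 3) -> bool:
    """Ask items in sequence; the respondent self-disqualifies at the
    first wrong answer and qualifies after `required` correct ones."""
    correct = 0
    for item, given in zip(TREE, responses):
        if given.strip().lower() != item.answer.lower():
            return False              # respondent drops out at this node
        correct += 1
        if correct >= required:
            return True
    return False

print(qualifies(["1950s", "median", "calibration"]))  # True
print(qualifies(["1960s", "median", "calibration"]))  # False
```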
Recent research, it seems, does not pursue intelligence/knowledge testing anymore (for an exception, see Burgman et al., 2011), which may have to do with a range of strategic research problems. First of all, such tests screen out "less knowledgeable" individuals during the process, which may cause feelings of disrespect and embarrassment, or simply waste intellectual resources. Testing candidates against a particular repertoire of knowledge also presupposes homogeneity among them, which is at odds with the heterogeneity, complexity and diversity in knowledges often desired in studies of the future (Tichy, 2004). But the most convincing argument against knowledge tests, we think, is the psychological insight that it is not knowledge itself, but the pattern-based organization of this knowledge that distinguishes experts from novices (Hoffman and Lintern, 2006; Hoffman et al., 1995; Shanteau, 1993). In other words, novices may sometimes know as much as experts, but they lack the ability to act (in cognitive terms) like an expert. As we have seen in Section 2.3., psychological traits, cognitive skills, decision strategies and task characteristics play a considerable role in expert performance.
4.1.9. Psychological traits

Scholars following the psychological traits approach propose that external criteria may be indicative, but not conclusive, for identifying persons who truly think like an expert. Expert cognition, as we have learned, includes a conceptual, principled understanding of problems and an extensive, more "abstract," interrelated and networked, "mental model"-based framing of information. Shanteau et al. (2002) argued for combining two measures to identify domain experts. The first is called intrajudge reliability or "within-person reliability" (Shanteau et al., 2002) – a measure to analyze individuals' inner consistency, for example the ability to reproduce judgments in similar situations (Bolger and Wright, 1994; Einhorn, 1974). The second measure is discrimination ability, i.e. the ability of an expert to draw finer and more subtle differences than proficient performers (Einhorn, 1974; Ngo et al., 2016; Weiss and Shanteau, 2003). Beyond intrajudge reliability and discrimination ability, it is also possible to test the degree to which experts draw analogies, their ability to recall situations similar to a target case and to determine their similarity with the target (Goodwin and Wright, 2010; Green and Armstrong, 2007b). In combining the above and other criteria, researchers may also create more complex "trait profiles" that integrate psychological and personality domains attributed to experts: self-confidence, communication skills, stress tolerance, and others (see Shanteau, 1988 for an overview). In connection with the sociological aspects raised earlier, trait profile approaches assume a certain "style" in experts that consists of both cognitive and sociological elements. A domain example is Abdolmohammadi and Shanteau's (1992) study, which evaluates the relative importance of 20 different attributes associated with auditors, ranging from cognitive/knowledge attributes to self-presentation and personal appearance.

It is obvious that tests examining cognitive characteristics of potential participants require an extraordinary amount of preparation work and psychological expertise on behalf of the research team, and put high demands on research subjects, whose readiness to participate in lengthy studies is often limited. It also depends on the research field whether researchers can rely on previous insights into experts' domain-specific cognitive processes or whether much of this still remains implicit, obscure or undocumented (Hoffman and Lintern, 2006). Moreover, leading scholars such as Ericsson and Lehmann (1996) have demonstrated that measures of basic mental capacities are not predictive of expert performance, especially if the task is unfamiliar and new.

While the psychological traits perspective may potentially help to identify domain experts, it remains unclear how to ensure high performance for foresight-related tasks. Lipinski and Loveridge (1982) list three types of competencies needed: (1) substantive knowledge (and, we would add, experience) in a particular field; (2) the ability to cope when faced with an uncertain extension of this substantive knowledge; and (3) imagination. Whereas most of the approaches listed so far cover the first aspect of domain expertise, the second (action under uncertainty) and the third (imagination) are rarely subject to consideration. Notably, researchers have begun to expand selection criteria beyond domain expertise, for instance by opting for "mixed" panels including both domain and process experts (Miles et al., 2016) or by conducting face-to-face interviews in order to sort out "remarkable people" among them who are able to "think the unthinkable" (Chermack, 2011; Van der Heijden, 2011). The latter "remarkability" approach is strongly reminiscent of the idea of genius forecasting, which tacitly assumes that greatness in forecasting extraordinary events arises from chance and exceptional, unique talent, or what Chi (2006) called absolute expertise. Other scholars described these types of forecasts as "unanchored events" (Gordon and Glenn, 2003) and the persons making them as highly intelligent, assertive personalities (Bishop et al., 2007). In its basic assumption, the idea of genius forecasting differs from the relative approaches that classify levels of proficiency from novice to master (e.g. Chi, 2006; Dreyfus and Dreyfus, 2005), instead assuming that something other than analytical capabilities or conventional expertise becomes important to forecast extraordinary events. By highlighting the usually neglected, critical role of creativity and imagination of panelists, it provides a useful starting ground for deepening investigation of what characterizes a "foresight expert." A pioneering attempt pushing further in this direction is Hines et al.'s (2017) collective effort to define foresight competencies, which we introduced earlier in Section 3.
4.2. Combined approaches

While we disentangled diverse methods for the sake of achieving an overview, research practitioners often mix and integrate them. Given the shortcomings implied in the various expert-identification procedures, several researchers recommended combining different assessment and measurement tools (see Grenier and Germain (2014) for an overview). Such combinational approaches are in line with methods of "proficiency scaling" established in cognitive psychology, which build on external cues, professional standards, performance measures and social acclamation (Hoffman, 1998).

While there have been several calls for the quantification of expertise, for instance in the context of human resource development (Germain and Tejeda, 2012; Herling, 2000; Swanson et al., 2001), such operationalization is a highly complex matter. The above short introduction into cognitive perspectives on expertise already indicates a whole range of aspects to be considered. To date, there is no standard tool to measure expertise across domains (Germain and Tejeda, 2012); however, several scholars have engaged in preliminary efforts to develop such an instrument. To begin with, Okoli and Pawlowski (2004) developed what they call the Knowledge Resource Nomination Worksheet. This worksheet involves identifying relevant disciplines and organizations, and individual experts within these who either agree to participate or recommend others to do so, resulting in a ranked list based on qualifications within this crowd. The basic rationale is social acclamation. Van der Heijden's (2000) tool combines self-ratings with external ratings (by employees and their supervisors) and consists of five scales reflecting knowledge, meta-cognitive knowledge, skills, social recognition, and growth and flexibility. Two of the limitations addressed by the author were large discrepancies between managers' and employees' judgments and high intra-scale correlations, both of which call for further investigation based on this exploratory research (Van der Heijden, 2000).
Germain and Tejeda's (2012) GEM tool essentially builds on peer assessment, combining objective indicators of expertise with some more subjective items. The GEM has been applied to a range of occupations in education, medicine and general management (Grenier and Germain, 2014). The CWS (Cochran, Weiss, Shanteau) instrument instead targets experts' inner characteristics, by combining the above-mentioned cognitive measures of intrajudge (within-person) reliability and discrimination ability into a domain-independent instrument for measuring expertise. While this tool has been successfully applied to fields such as air traffic control (Pauley et al., 2009) or pitch judgments in music (Ngo et al., 2016), the authors admit that its data can only be interpreted relatively, not absolutely. In other words, the CWS tool helps to compare differently qualified candidates, but does not display a distribution of expertise within the population, so that identified "experts" may not be the foremost experts in a certain branch or domain. In effect, expert judgment may yield a high CWS score, but a high CWS score does not guarantee expertise (see Grenier and Germain, 2014).
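The core of the CWS instrument can be stated compactly: expertise is indexed as the ratio of discrimination (how strongly judgments vary across different stimuli) to inconsistency (how much repeated judgments of the same stimulus vary). The following sketch computes a simplified, variance-based version of this ratio on invented data; the published instrument derives both terms from a repeated-measures design (Weiss and Shanteau, 2003).

```python
import numpy as np

# Hypothetical ratings: rows = stimuli, columns = repeated presentations
# of the same stimulus to one candidate.
judgments = np.array([
    [7.0, 7.5, 7.2],   # stimulus 1, judged three times
    [3.0, 3.2, 2.9],   # stimulus 2
    [9.0, 8.8, 9.1],   # stimulus 3
])

# Discrimination: variance of the mean judgment across stimuli.
discrimination = judgments.mean(axis=1).var(ddof=1)
# Inconsistency: average variance across repetitions of the same stimulus.
inconsistency = judgments.var(axis=1, ddof=1).mean()

cws = discrimination / inconsistency
print(round(cws, 1))  # higher = more expert-like; interpret only relatively
```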
Spencer and Spencer's (2008) Technical Scale combines four dimensions: depth of knowledge, breadth of knowledge, acquisition of expertise, and distribution of expertise. Bolger and Wright (2017) hint at a problem within this framework, namely that "recognized authority" (type of position in an organization, see the section on external cues, 4.1.5.) is, against most definitions of expertise, valued as the highest scale. Some hybrid approaches combine quantitative modeling with expert judgments. Alvarado-Valencia et al. (2017), for instance, build on a priori indicators of expertise (CV/peer assessment) to categorize experts and simultaneously use past performance (forecasting ability) as the dependent variable.
It is our general impression that none of these tools has moved beyond a preliminary stage of development, which may have to do with the complexity of the task and the demand for domain specificity that limits the use value of these instruments. We are not aware of attempts to apply these instruments in the context of foresight. In most cases, foresight researchers opt for a simple combination of assessing experts' knowledge and experience on the basis of external cues (e.g. Van Zolingen and Klaassen, 2003). However, the "Foresight Competency Model" by Hines et al. (2017) represents an important effort in this direction, since it seeks to identify, in the context of applied foresight research, "what one ought to be able to do as a professional futurist" (p. 123). Importantly, this model not only accounts for domain-specific expertise, but also for what we called process expertise or performative expertise, i.e. the task-related skills desired for foresight exercises.

4.3. Selection from a pool of experts

Popular foresight projects, such as the Millennium Project, have access to a pool of experts who are readily available. The challenge is to select those experts who have the appropriate expertise for a certain topic at hand. For instance, researchers may match task requirements to respondents' expertise by comparing a priori determined and cross-checked attributes of participants via a scoring matrix (Shanteau et al., 2002). By taking weighted sums, each expert can be assigned a score or rank, and the persons with the top scores can be selected for a panel (see the sketch below). This approach differs from the above past performance and trait profile approaches insofar as already existing socio-psychological profiles are adapted to the task characteristics. Another idea is to analyze narrative material submitted by the respondents of a survey and automatically search for keywords or constructs in these data to identify their expertise for more specific judgment tasks (an approach also called "lock-and-key" (Gordon, 2009)). Hoffman and Lintern (2006) propose to conduct social interaction analysis, i.e. to develop a sociogram of interaction patterns between people in a particular field, firm or other professional context, in order to identify clusters of experts or processes and workflows.
With continuing developments in the usage of computers for automated search processes, researchers have begun to examine how search engines and data mining software can be used to identify experts. These systems make use of artificial intelligence technologies like data mining and clustering techniques to identify the best-matching experts within a community (Becerra-Fernandez, 2000; Breslin et al., 2007; Liu et al., 2005). Data on experts might be sourced from Internet-based discussions, email communication, public profiles and online communities (Breslin et al., 2007), from intranets or personal webpages, via Wikipedia or Google Scholar, or one may use existing platforms with an elaborated search function, such as ResearchGate or LinkedIn (Yimam-Seid and Kobsa, 2003). Employing these technologies, research teams developed early versions of expert locating systems, such as the "Expertise Browser" (Mockus and Herbsleb, 2002), "People Finder" or "Expert Finder" and other tools (Hughes and Crowder, 2009). These bear the advantage that they (potentially) discover and identify intellectual capital within a certain domain or organization, such as NASA (Becerra-Fernandez, 2000), in automated ways. However, since these systems usually face problems in distinguishing between experts and knowledgeable users, they often integrate peer recommendation, thus being prone to the social influence biases outlined earlier. Moreover, the initial costs and efforts to create such a system are considerable (Yimam-Seid and Kobsa, 2003). Nevertheless, with further developments in computer-aided social profiling, matching expertise and task requirements can become a valuable instrument. Other measures such as past performance, self-assessments, and external cues could be included to generate a comprehensive overview of available experts.
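The "lock-and-key" idea of matching a task description against experts' own narrative material can be illustrated with a simple keyword-overlap matcher. This sketch is our own toy example; real expert-finder systems use far richer text mining, and the profile texts are invented.

```python
# Toy "lock-and-key" matching: score experts by the overlap between a
# task description (the lock) and their narrative self-descriptions
# (the keys).
task = "long-term scenarios for urban mobility and transport policy"

profiles = {
    "Expert A": "delphi surveys on logistics and supply chains",
    "Expert B": "urban mobility research, transport policy advice",
    "Expert C": "scenario planning for energy markets",
}

def keyword_overlap(a: str, b: str) -> int:
    return len(set(a.lower().split()) & set(b.lower().split()))

ranking = sorted(profiles,
                 key=lambda name: keyword_overlap(task, profiles[name]),
                 reverse=True)
print(ranking)  # Expert B matches best on this hypothetical task
```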
5. Conclusion and managerial implications

Our review describes techniques for selecting experts for foresight studies. All of the methods we have listed bear some methodological weaknesses; some demand very time-intensive preparation work, others are particularly prone to biases. Thus, our review provides researchers with a menu of possibilities. If political stakes are high and foresight projects target producing social change, it seems wise to pick from the sociological methods toolbox: social acclamation secures the inclusion of experts beyond a known circle and decreases the likelihood of dropout. The criteria of political influence and personal involvement potentially enable an engaged debate between high-impact actors who undergo a process of collective learning and consensus-building along the way. If, on the other hand, the foresight process demands high technological expertise, because it concerns future developments in highly specialized markets, researchers may want to secure the domain expertise (which may also include intensive user/consumer experience) of research subjects. In this latter case, the research team should combine different external cues-based measures, self-ratings or even psychological traits. If a field is highly networked already and potential experts intensively use web platforms for communication and self-presentation, it makes sense to draw strategically from these digital networks.

It follows from our research that the status of experts should not be self-evident but should involve consideration as to 1) how one defines an expert (e.g. as a person of impact, in cognitive terms, in terms of domain knowledge etc., see Sections 2.1.-2.3.) and 2) whether selection methods correspond to these assumptions (see Table 1). For instance, you may consider whether the study objective demands respondents 1) with maximum factual knowledge of the topic, 2) with a high level of imagination or inventiveness, or 3) representing a particular interest group or concept, as might be the case in allocating funds to competing research projects. In the first case, the nomination of experts might be based on perceived levels of expertise (knowledge and experience), with the more expert the better. In the second case, inventiveness and the ability to challenge the "common wisdom" (as noted below) might be an important attribute. Finally, in the third case, the nomination process would be designed to find articulate people committed to each of the conflicting positions. Beyond this general rationale, our review leads us to the following conclusions and recommendations:

1 Without a great deal of care, and the use of systematic methods of selection, the choice of experts may involve hidden biases of the researchers themselves, especially if they rely on external cues (see Section 4.1.5.).
2 Note that it is impossible to choose experts who are unbiased or, in the aggregate, a fully unbiased panel. The task is to become aware of and mitigate the biases that may affect a panel of experts.
3 If the project aims at probabilistic statements and involves a repetitive form of judgment (forecasting), researchers should attend to the behavioral view (2.2.) and strive to reduce potential biases. Past-performance approaches (4.1.7.) can be meaningful in this context.
4 If the project seeks to create structured dialog, vision and strategies to act upon long-term futures or even to "construct" a future (foresight), biases and dissent can be desirable. Biases may indicate divergence in the experiences of knowledgeable experts who see alternative paths, challenge taken-for-granted wisdom and broaden the horizon of the thinkable. Accordingly, researchers in the field of foresight often opt for sociological assessment criteria, such as social acclamation, political influence and/or personal involvement (see Sections 2.1, 4.1.2–4.1.4.).
5 If self-assessments (4.1.6.) are used, they should either be combined with a form of assessment based on external cues (4.1.5.) or be employed in an intraindividual approach in which subjects compare their own performance in a stationary process (for details, see Ward et al., 2002).
6 Consider the option of training experts (see Section 3.) for the purpose of the study in ways that a) improve competences that are needed for the judgment process and/or b) allow participants to reflect on and learn from their own judgment performance.
7 Note that selection methods based on past performance demand pre-existent data (see Section 4.1.7.).
8 Note that selection methods based on knowledge tests (4.1.8.) and psychological traits (4.1.9.) are complex and time-intensive.
9 Consider whether the study setup demands capabilities that are rarely subject to consideration, such as imagination or creativity (see Lipinski and Loveridge, 1982).
10 If possible, strive for a combination of different selection methods (4.2.) and consider the option of using expert pools (4.3.).

Our review tapped into three disciplinary angles and their views on expertise, with the aim to provide a deeper understanding of expert selection. The strength of our review lies in offering a preliminary orientation and guiding rationales to scholars confronted with the task of designing a panel or selecting individuals from a bigger crowd. However, we realized that expertise in foresight needs further theorization beyond the initial attempt by Hines et al. (2017). A large-scale study that gathers and tracks experts' performance in different domains of foresight may provide further insight. Moreover, additional research is required regarding the training of experts and the synergy between models and expert opinion. By being aware of their own biases, experts can considerably improve their judgment results. Future research can work with this kind of focus and promises remarkable results. In Section 4.1.9. (Psychological Traits) we referred to fields of research such as unanchored events and genius forecasting, which still seem under-examined in comparison to the other domains.

CRediT authorship contribution statement

Stefanie Mauksch: Conceptualization, Methodology, Validation, Writing - original draft, Writing - review & editing, Project administration. Heiko A. von der Gracht: Methodology, Writing - original draft, Writing - review & editing, Project administration. Theodore J. Gordon: Methodology, Writing - original draft, Writing - review & editing.

Acknowledgments

We would like to particularly thank Dr. Philipp Ecken for inspiring discussions, thoughtful comments and his valuable contributions to the manuscript. Further, we would like to thank Patricia Goren, Nick Lange and Cody Torgerson for their support in the final professional proofreading.

Supplementary materials

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.techfore.2020.119982.

References

Abdolmohammadi, M.J., Shanteau, J., 1992. Personal attributes of expert auditors. Organ. Behav. Hum. Decis. Process. 53 (2), 158–172.
Adler, M., Ziglio, E., 1996. Gazing into the Oracle: The Delphi Method and Its Application to Social Policy and Public Health. Jessica Kingsley Publishers, London.
Agnew, N.M., Ford, K.M., Hayes, P.J., 1994. Expertise in context: personally constructed, socially selected, and reality-relevant? Int. J. Expert Syst. 7 (1), 65–88.
Almandoz, J., Tilcsik, A., 2016. When experts become liabilities: domain experts on boards and organizational failure. Acad. Manag. J. 59 (4), 1124–1149.
Alvarado-Valencia, J., Barrero, L.H., Önkal, D., Dennerlein, J.T., 2017. Expertise, credibility of system forecasts and integration methods in judgmental demand forecasting. Int. J. Forecast. 33 (1), 298–313.
Andersen, P.D., Johnston, R., Saritas, O., 2017. FTA and innovation systems. Technol. Forecast. Soc. Change 115, 236–239.
Armstrong, J.S., 1980. The seer-sucker theory: the value of experts in forecasting. Technol. Rev., 16–24.
Armstrong, J.S., 1985. Long-Range Forecasting. John Wiley, New York.
Atanasov, P., Rescober, P., Stone, E., Swift, S.A., Servan-Schreiber, E., Tetlock, P., et al., 2016. Distilling the wisdom of crowds: prediction markets vs. prediction polls. Manage. Sci. 63 (3), 691–706.
Baker, J., Lovell, K., Harris, N., 2006. How expert are the experts? An exploration of the concept of 'expert' within Delphi panel techniques. Nurse Res. 14 (1).
Becerra-Fernandez, I., 2000. Facilitating the online search of experts at NASA using Expert Seeker People-Finder.
Bedard, J., 1989. Expertise in auditing: myth or reality? Account. Organ. Soc. 14 (1), 113–131.
Bedard, J., Chi, M.T., 1992. Expertise. Curr. Dir. Psychol. Sci. 1 (4), 135–139.
Belton, I., MacDonald, A., Wright, G., Hamlin, I., 2019. Improving the practical application of the Delphi method in group-based judgment: a six-step prescription for a well-founded and defensible process. Technol. Forecast. Soc. Change 147, 72–82.
Bereiter, C., Scardamalia, M., 1993. Surpassing Ourselves: An Inquiry Into the Nature and Implications of Expertise. Open Court, Chicago.
Best, R.J., 1974. An experiment in Delphi estimation in marketing decision making. J. Market. Res. 11, 447–468.
Bishop, P., Hines, A., Collins, T., 2007. The current state of scenario development: an overview of techniques. Foresight 9 (1), 5–25.
Bolger, F., Wright, G., 1994. Assessing the quality of expert judgment: issues and analysis. Decis. Support Syst. 11 (1), 1–24.
Bolger, F., Wright, G., 2017. Use of expert knowledge to anticipate the future: issues, analysis and directions. Int. J. Forecast. 33 (1), 230–243.
Bonaccorsi, A., Apreda, R., Fantonia, G., 2020. Expert biases in technology foresight. Why they are a problem and how to mitigate them. Technol. Forecast. Soc. Change 151, 119855.
Borison, A., Hamm, G., 2010. Prediction markets: a new tool for strategic decision making. Calif. Manage. Rev. 52 (4), 125–141.
Breslin, J.G., Bojars, U., Aleman-Meza, B., Boley, H., Mochol, M., Nixon, L.J.B., et al., 2007. Finding experts using Internet-based discussions in online communities and associated social networks. In: Paper presented at the 1st International ExpertFinder Workshop (EFW 2007).
Burgman, M.A., Carr, A., Godden, L., Gregory, R., McBride, M., Flander, L., et al., 2011a. Redefining expertise and improving ecological judgment. Conserv. Lett. 4 (2), 81–87.
Burgman, M.A., McBride, M., Ashton, R., Speirs-Bridge, A., Flander, L., Wintle, B., et al., 2011b. Expert status and performance. PLoS One 6 (7), 1–7.
Calof, J., Smith, J.E., 2010. Critical success factors for government-led foresight. Sci. Public Policy 37 (1), 31–40.
Chakravarti, A.K., Vasanta, B., Krishnan, A.S.A., Dubash, R.K., 1998. Modified Delphi methodology for technology forecasting: case study of electronics and information technology in India. Technol. Forecast. Soc. Change 58 (1), 155–165.
Chermack, T.J., 2011. Scenario Planning in Organizations: How to Create, Use, and Assess Scenarios. Berrett-Koehler Publishers, San Francisco, CA.
Chi, M.T.H., 2006. Two approaches to the study of expert characteristics. In: Ericsson, K.A., Charness, N., Feltovich, P.J., Hoffman, R.R. (Eds.), The Cambridge Handbook of Expertise and Expert Performance. Cambridge University Press, Cambridge, pp. 21–30.
Cooke, R., 1991. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, Oxford.
Croce, V., Wöber, K., Kester, J., 2016. Expert identification and calibration for collective forecasting tasks. Tourism Econ. 22 (5), 979–994.
Cuhls, K., 2003. From forecasting to foresight processes—new participative foresight activities in Germany. J. Forecast. 22 (2–3), 93–111.
Cuhls, K., Georghiou, L., 2004. Evaluating a participative foresight process: 'Futur – the German research dialogue'. Res. Eval. 13 (3), 143–153.
Dakin, S., Armstrong, J.S., 1989. Predicting job performance: a comparison of expert opinion and research findings. Int. J. Forecast. 5 (2), 187–194.
Dalkey, N., Brown, B., 1971. Comparison of group judgment techniques with short-range predictions and almanac questions. RAND Corp. Retrieved from http://www.dtic.mil/cgi-bin/GetTRDoc?AD=AD0728741.
Dalkey, N., Brown, B., Cochran, S., 1970. Use of self-ratings to improve group estimates: experimental evaluation of Delphi procedures. Technol. Forecast. 1 (3), 283–291.
Davis, D.A., Mazmanian, P.E., Fordis, M., Van Harrison, R., Thorpe, K.E., Perrier, L., 2006. Accuracy of physician self-assessment compared with observed measures of competence. JAMA 296 (9), 1094–1102.
Day, D.V., Lord, R.G., 1992. Expertise and problem categorization: the role of expert processing in organizational sense-making. J. Manag. Stud. 29 (1), 35–47.
de Loë, R.C., 1995. Exploring complex policy questions using the policy Delphi: a multi-round, interactive survey method. Appl. Geogr. 15 (1), 53–68.
de Loë, R.C., Melnychuk, N., Murray, D., Plummer, R., 2016. Advancing the state of policy Delphi practice: a systematic review evaluating methodological evolution, innovation, and opportunities. Technol. Forecast. Soc. Change 104, 78–88.
Denrell, J., Fang, C., 2010. Predicting the next big thing: success as a signal of poor judgment. Manage. Sci. 56 (10), 1653–1667.
Devaney, L., Henchion, M., 2018. Who is a Delphi 'expert'? Reflections on a bioeconomy expert selection procedure from Ireland. Futures 99, 45–55.
Devine, D.J., Kozlowski, S.W.J., 1995. Domain-specific knowledge and task characteristics in decision making. Organ. Behav. Hum. Decis. Process. 64 (3), 294–306.
Dorr, A., 2017. Common errors in reasoning about the future: three informal fallacies. Technol. Forecast. Soc. Change 116, 322–330.
Dreyfus, H.L., Dreyfus, S.E., 2005. Peripheral vision: expertise in real world contexts. Organ. Stud. 26 (5), 779.
Dufva, M., Ahlqvist, T., 2015. Knowledge creation dynamics in foresight: a knowledge typology and exploratory method to analyse foresight workshops. Technol. Forecast. Soc. Change 94, 251–268.
Ecken, P., Gnatzy, T., von der Gracht, H.A., 2011. Desirability bias in foresight: consequences for decision quality based on Delphi results. Technol. Forecast. Soc. Change 78 (9), 1654–1670. https://doi.org/10.1016/j.techfore.2011.05.006.
Einhorn, H.J., 1974. Expert judgment: some necessary conditions and an example. J. Appl. Psychol. 59 (5), 562–571.
Einhorn, H.J., Hogarth, R.M., 1978. Confidence in judgment: persistence of the illusion of validity. Psychol. Rev. 85 (5), 395–416.
Engelke, H., Mauksch, S., Darkow, I.-L., von der Gracht, H., 2016. Heading toward a more social future? Scenarios for social enterprises in Germany. Bus. Soc. 55 (1), 56–89.
Ericsson, K.A., 2006a. The influence of experience and deliberate practice on the development of superior expert performance. In: Ericsson, K.A. (Ed.), The Cambridge Handbook of Expertise and Expert Performance. Cambridge University Press, Cambridge, pp. 685–705.
Ericsson, K.A., 2006b. An introduction to the Cambridge Handbook of Expertise and Expert Performance: its development, organization and content. In: Ericsson, K.A., Charness, N., Feltovich, P.J., Hoffman, R.R. (Eds.), The Cambridge Handbook of Expertise and Expert Performance. Cambridge University Press, Cambridge, pp. 3–19.
Ericsson, K.A., Krampe, R.T., Tesch-Römer, C., 1993. The role of deliberate practice in the acquisition of expert performance. Psychol. Rev. 100 (3), 363.
Ericsson, K.A., Lehmann, A.C., 1996. Expert and exceptional performance: evidence of maximal adaptation to task constraints. Annu. Rev. Psychol. 47 (1), 273–305.
Ericsson, K.A., Prietula, M.J., Cokely, E.T., 2007. The making of an expert. Harv. Bus. Rev. 85 (7/8), 114.
Eyal, G., 2013. For a sociology of expertise: the social origins of the autism epidemic. Am. J. Sociol. 118 (4), 863–907.
Förster, B., 2015. Technology foresight for sustainable production in the German automotive supplier industry. Technol. Forecast. Soc. Change 92, 237–248.
Gabriel, J., 2013. A scientific enquiry into the future. Eur. J. Fut. Res. 2 (1), 31.
Germain, M.L., Tejeda, M.J., 2012. A preliminary exploration on the measurement of expertise: an initial development of a psychometric scale. Hum. Resour. Develop. Q. 23 (2), 203–232.
Gigerenzer, G., Gaissmaier, W., Kurz-Milcke, E., Schwartz, L.M., Woloshin, S., 2007. Helping doctors and patients make sense of health statistics. Psychol. Sci. Public Interest 8 (2), 53–96.
Goodwin, P., Wright, G., 2010. The limits of forecasting methods in anticipating rare events. Technol. Forecast. Soc. Change 77 (3), 355–368.
Gordon, T.J., 2009. Delphi. In: Glenn, J.C., Gordon, T.J. (Eds.), The Millennium Project, New York, NY, pp. 1–30.
Gordon, T.J., Glenn, J.C., 1994. An introduction to the Millennium Project. Technol. Forecast. Soc. Change 47 (2), 147–170.
Gordon, T.J., Glenn, J.C., 2003. Integration, comparisons, and frontiers of futures research methods. In: Futures Research Methodology (Version 2.0). AC/UNU Millennium Project, Washington, DC.
Graham, B., Regehr, G., Wright, J.G., 2003. Delphi as a method to establish consensus for diagnostic criteria. J. Clin. Epidemiol. 56 (12), 1150–1156.
Green, K.C., Armstrong, J.S., 2007a. The Ombudsman: value of expertise for forecasting decisions in conflicts. INFORMS J. Appl. Anal. 37 (3), 287–299.
Green, K.C., Armstrong, J.S., 2007b. Structured analogies for forecasting. Int. J. Forecast. 23 (3), 365–376.
Greenwald, A.G., Banaji, M.R., 1995. Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychol. Rev. 102 (1), 4.
Grenier, R.S., Germain, M.L., 2014. Expertise through the HRD lens. In: Chalofsky, N.E., Rocco, T.S., Morris, M.L. (Eds.), Handbook of Human Resource Development. Wiley, New York, pp. 181–200.
Hallowell, M.R., Gambatese, J.A., 2010. Qualitative research: application of the Delphi method to CEM research. J. Constr. Eng. Manag. 136 (1), 99–107.
Hambrick, D.Z., Campitelli, G., Macnamara, B.N., 2017. The Science of Expertise: Behavioral, Neural, and Genetic Approaches to Complex Skill. Routledge, New York.
Hanea, A.M., McBride, M.F., Burgman, M.A., Wintle, B.C., 2018. The value of performance weights and discussion in aggregated expert judgments. Risk Anal. 38 (9), 1781–1794.
Harvey, N., 1997. Confidence in judgment. Trends Cogn. Sci. 1 (2), 78–82.
Hasson, F., Keeney, S., 2011. Enhancing rigour in the Delphi technique research. Technol. Forecast. Soc. Change 78 (9), 1695–1704.
Hasson, F., Keeney, S., McKenna, H., 2000. Research guidelines for the Delphi survey technique. J. Adv. Nurs. 32 (4), 1008–1015.
Hedlund, J., Forsythe, G.B., Horvath, J.A., Williams, W.M., Snook, S., Sternberg, R.J., 2003. Identifying and assessing tacit knowledge: understanding the practical intelligence of military leaders. Leadersh. Q. 14 (2), 117–140.
Herling, R.W., 2000. Operational definitions of expertise and competence. Adv. Dev. Hum. Resour. 2 (1), 8–21.
Herling, R.W., Provo, J., 2000. Knowledge, competence, and expertise in organizations. Adv. Dev. Hum. Resour. 2 (1), 1–7.
Hilary, G., Menzly, L., 2006. Does past success lead analysts to become overconfident? Manage. Sci. 52 (4), 489–500.
Hines, A., Gary, J., Daheim, C., van der Laan, L., 2017. Building foresight capacity: toward a foresight competency model. World Fut. Rev. 9 (3), 123–141.
Hoffman, R.R., 1998. How can expertise be defined? Implications of research from cognitive psychology. In: Williams, R., Faulkner, W., Fleck, J. (Eds.), Exploring Expertise. Palgrave Macmillan, London, pp. 81–100.
Hoffman, R.R., Lintern, G., 2006. Eliciting and representing the knowledge of experts. In: Ericsson, K.A., Charness, N., Feltovich, P.J., Hoffman, R.R. (Eds.), The Cambridge Handbook of Expertise and Expert Performance. Cambridge University Press, Cambridge, pp. 203–222.
Hoffman, R.R., Shadbolt, N.R., Burton, A.M., Klein, G., 1995. Eliciting knowledge from experts: a methodological analysis. Organ. Behav. Hum. Decis. Process. 62 (2), 129–158.
Hommel, U., Prokesch, T., Wohlenberg, H., von der Gracht, H.A., 2019. Effects of supplying additional information: experimental evidence on the behavior of capital market experts. Futures & Foresight Science 1 (3–4), e21. https://doi.org/10.1002/ffo2.21.
Honda, H., Washida, Y., Sudo, A., Wajima, Y., Awata, K., Ueda, K., 2017. The difference in foresight using the scanning method between experts and non-experts. Technol. Forecast. Soc. Change 119, 18–26.
Hughes, G., Crowder, R., 2009. Experiences in designing highly adaptable expertise finder systems. In: Design Engineering Technical Conference, September 2–6.
Hunt, E., 2006. Expertise, talent, and social encouragement. In: Ericsson, K.A., Charness, N., Feltovich, P.J., Hoffman, R.R. (Eds.), The Cambridge Handbook of Expertise and Expert Performance. Cambridge University Press, Cambridge, pp. 31–38.
Hussler, C., Muller, P., Rondé, P., 2011. Is diversity in Delphi panelist groups useful? Evidence from a French forecasting exercise on the future of nuclear energy. Technol. Forecast. Soc. Change 78 (9), 1642–1653.
Jacob, J., Lys, T.Z., Neale, M.A., 1999. Expertise in forecasting performance of security analysts. J. Account. Econ. 28 (1), 51–82.
Kahneman, D., Klein, G., 2009. Conditions for intuitive expertise: a failure to disagree. Am. Psychol. 64 (6), 515–526.
Kahneman, D., Lovallo, D., 1993. Timid choices and bold forecasts: a cognitive perspective on risk taking. Manage. Sci. 39 (1), 17–31.
Karlsen, J.E., 2014. Design and application for a replicable foresight methodology bridging quantitative and qualitative expert data. Eur. J. Fut. Res. 2 (1), 40.
Kawamoto, C., Wright, J.T.C., Spers, R.G., Carvalho, D.E.D., 2017. Self-rating in a Delphi-like experiment. Acad. Manag. Proc. 2017 (1), 10200.
Kawamoto, C., Wright, J.T.C., Spers, R.G., de Carvalho, D.E., 2019. Can we make use of perception of questions' easiness in Delphi-like studies? Some results from an experiment with an alternative feedback. Technol. Forecast. Soc. Change 140, 296–305.
Keeney, S., Hasson, F., McKenna, H., 2006. Consulting the oracle: ten lessons from using the Delphi technique in nursing research. J. Adv. Nurs. 53 (2), 205–212.
Keeney, S., Hasson, F., McKenna, H.P., 2001. A critical review of the Delphi technique as a research methodology for nursing. Int. J. Nurs. Stud. 38 (2), 195–200.
Kembro, J., Näslund, D., Olhager, J., 2017. Information sharing across multiple supply chain tiers: a Delphi study on antecedents. Int. J. Prod. Econ. 193, 77–86.
Klein, G., Shneiderman, B., Hoffman, R.R., Ford, K.M., 2017. Why expertise matters: a response to the challenge. IEEE Intell. Syst. 32 (6), 67–73.
Kolodner, J.L., 1983. Towards an understanding of the role of experience in the evolution from novice to expert. Int. J. Man Mach. Stud. 19 (5), 497–518.
Kuchinke, K.P., 1997. Employee expertise: the status of the theory and the literature. Perf. Improv. Q. 10 (4), 72–86.
Larreche, J.-C., Moinpour, R., 1983. Managerial judgment in marketing: the concept of expertise. J. Market. Res. 20 (2), 110–121.
Linstone, H., Turoff, M., 2002. The Delphi Method: Techniques and Applications (electronic version). New Jersey Institute of Technology, Newark, NJ.
Lipinski, A., Loveridge, D., 1982. How we forecast. Institute for the Future's study of the UK, 1978–1995. Futures 14 (3), 205–239.
Liu, X., Croft, W.B., Koll, M., 2005. Finding experts in community-based question-answering services. In: Proceedings of the 14th ACM International Conference on Information and Knowledge Management, Bremen, Germany.
Loveridge, D., 2004. Experts and foresight: review and experience. Int. J. Foresight Innov. Policy 1 (1), 33–69.
Makkonen, M., Hujala, T., Uusivuori, J., 2016. Policy experts' propensity to change their opinion along Delphi rounds. Technol. Forecast. Soc. Change 109, 61–68.
Marchant, G., 1990. Discussion of determinants of auditor expertise. J. Account. Res. 28, 21–28.
Martin, B.R., 2010. The origins of the concept of 'foresight' in science and technology: an insider's perspective. Technol. Forecast. Soc. Change 77 (9), 1438–1447.
Matheny, J., 2016. Forecasting innovation: lessons from IARPA's research programs. Res. Technol. Manag. 59 (6), 36–40.
McDonald, D.W., 2001. Evaluating expertise recommendations. In: Proceedings of the 2001 International ACM SIGGROUP Conference on Supporting Group Work.
Meijering, J., Kampen, J., Tobi, H., 2013. Quantifying the development of agreement among experts in Delphi studies. Technol. Forecast. Soc. Change 80 (8), 1607–1614.
Mikhail, M.B., Walther, B.R., Willis, R.H., 1997. Do security analysts improve their performance with experience? J. Account. Res. 35, 131–157.
Miles, I., Saritas, O., Sokolov, A., 2016. Interaction: participation and recruitment. In: Miles, I., Saritas, O., Sokolov, A. (Eds.), Foresight for Science, Technology and Innovation. Springer International Publishing, Cham, pp. 43–62.
Miller, G., 2001. The development of indicators for sustainable tourism: results of a Delphi survey of tourism researchers. Tourism Manag. 22 (4), 351–362.
Mockus, A., Herbsleb, J.D., 2002. Expertise browser: a quantitative approach to identifying expertise. In: Proceedings of the 24th International Conference on Software Engineering (ICSE 2002).
Mullen, P.M., 2003. Delphi: myths and reality. J. Health Organ. Manag. 17 (1), 37–52.
Nedeva, M., Georghiou, L., Loveridge, D., Cameron, H., 1996. The use of co-nomination to identify expert participants for Technology Foresight. R&D Manag. 26 (2), 155–168.
Needham, R.D., de Loë, R.C., 1990. The policy Delphi: purpose, structure, and application. Canadian Geographer/Le Géographe canadien 34 (2), 133–142.
Ngo, M.K., Vu, K.-P.L., Strybel, T.Z., 2016. Effects of music and tonal language experience on relative pitch performance. Am. J. Psychol. 129 (2), 125–134.
Okoli, C., Pawlowski, S.D., 2004. The Delphi method as a research tool: an example, design considerations and applications. Inf. Manag. 42 (1), 15–29.
Parente, R., Anderson-Parente, J., 2011. A case study of long-term Delphi accuracy. Technol. Forecast. Soc. Change 78 (9), 1705–1711.
Pauley, K., O'Hare, D., Wiggins, M., 2009. Measuring expertise in weather-related aeronautical risk perception: the validity of the Cochran–Weiss–Shanteau (CWS) Index. Int. J. Aviat. Psychol. 19 (3), 201–216.
Phelps, R.H., Shanteau, J., 1978. Livestock judges: how much information can an expert use? Organ. Behav. Hum. Perform. 21 (2), 209–219.
Poli, R., 2011. Ethics and futures studies. Int. J. Manag. Concepts Philos. 5 (4), 403–410.
Renzi, A.B., Freitas, S., 2015. The Delphi method for future scenarios construction. Proc. Manuf. 3, 5785–5791.
Richman, H.B., Gobet, F., Staszewski, J.J., Simon, H.A., 1996. Perceptual and memory processes in the acquisition of expert performance: the EPAM model. In: Ericsson, K.A. (Ed.), The Road to Excellence: The Acquisition of Expert Performance in the Arts and Sciences, Sports, and Games. Erlbaum, Mahwah, pp. 167–188.
Rowe, G., Wright, G., 1996. The impact of task characteristics on the performance of structured group forecasting techniques. Int. J. Forecast. 12 (1), 73–89.
Rowe, G., Wright, G., Bolger, F., 1991. Delphi: a reevaluation of research and theory. Technol. Forecast. Soc. Change 39 (3), 235–251.
Sackman, H., 1974. Delphi assessment: expert opinion, forecasting, and group process. RAND Corp.
Shanteau, J., 1988. Psychological characteristics and strategies of expert decision makers. Acta Psychol. 68 (1–3), 203–215.
Shanteau, J., 1992. Competence in experts: the role of task characteristics. Organ. Behav. Hum. Decis. Process. 53 (2), 252–266.
Shanteau, J., 1993. Discussion of expertise in auditing. Auditing 12, 51–56.
Shanteau, J., 2015. Why task domains (still) matter for understanding expertise. J. Appl. Res. Mem. Cogn. 4 (3), 169–175.
Shanteau, J., Weiss, D.J., Thomas, R.P., Pounds, J.C., 2002. Performance-based assessment of expertise: how to decide if someone is an expert or not. Eur. J. Oper. Res. 136 (2), 253–263.
Shin, T., 1998. Using Delphi for a long-range technology forecasting, and assessing directions of future R&D activities: the Korean exercise. Technol. Forecast. Soc. Change 58 (1), 125–154.
Sims, V.K., Mayer, R.E., 2002. Domain specificity of spatial expertise: the case of video game players. Appl. Cognit. Psychol. 16 (1), 97–115.
Sinha, P., Brown, L.D., Das, S., 1997. A re-examination of financial analysts' differential earnings forecast accuracy. Contemp. Account. Res. 14 (1), 1–42.
Sjöberg, L., 2009. Are all crowds equally wise? A comparison of political election forecasts by experts and the public. J. Forecast. 28 (1), 1–18.
Spencer, L.M., Spencer, P.S.M., 2008. Competence at Work: Models for Superior Performance. Wiley, New York.
Spickermann, A., Grienitz, V., von der Gracht, H.A., 2014. Heading towards a multimodal city of the future? Multi-stakeholder scenarios for urban mobility. Technol. Forecast. Soc. Change 89, 201–221.
Sutterlüty, A., Hesser, F., Schwarzbauer, P., Schuster, K.C., Windsperger, A., Stern, T., 2017. A Delphi approach to understanding varying expert viewpoints in sustainability communication: the case of water footprints of bio-based fiber resources. J. Ind. Ecol. 21 (2), 412–422.
Swanson, R.A., Holton, E., Holton, E.F., 2001. Foundations of Human Resource Development. Berrett-Koehler Publishers, San Francisco, CA.
Tetlock, P., 2017. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, Princeton, NJ.
Thomas, R.P., Lawrence, A., 2018. Assessment of expert performance compared across professional domains. J. Appl. Res. Mem. Cogn. 7 (2), 167–176.
Tiberius, V., Hirth, S., 2019. Impacts of digitization on auditing: a Delphi study for Germany. J. Int. Account. Audit. Taxat. 37, 100288.
Tichy, G., 2004. The over-optimism among experts in assessment and foresight. Technol. Forecast. Soc. Change 71 (4), 341–363.
Toma, C., Picioreanu, I., 2016. The Delphi technique: methodological considerations and the need for reporting guidelines in medical journals. Int. J. Public Health Res. 4 (6), 47–59.
Turoff, M., 1970. The design of a policy Delphi. Technol. Forecast. Soc. Change 2, 149–171.
Tversky, A., Kahneman, D., 1974. Judgment under uncertainty: heuristics and biases. Science 185 (4157), 1124–1131.
Van der Heijden, B.I.J.M., 2000. The development and psychometric evaluation of a multidimensional measurement instrument of professional expertise. High Ability Stud. 11 (1), 9–39.
Van der Heijden, K., 2011. Scenarios: The Art of Strategic Conversation. Wiley, New York.
van der Pas, J.W.G.M., Kwakkel, J.H., Van Wee, B., 2012. Evaluating adaptive policy-making using expert opinions. Technol. Forecast. Soc. Change 79 (2), 311–325.
Van Zolingen, S.J., Klaassen, C.A., 2003. Selection processes in a Delphi study about key qualifications in senior secondary vocational education. Technol. Forecast. Soc. Change 70 (4), 317–340.
von der Gracht, H.A., 2008. The Future of Logistics: Scenarios for 2025. Gabler, Wiesbaden.
Wagner, S.A., Vogt, S., Kabst, R., 2016. The future of public participation: empirical analysis from the viewpoint of policy-makers. Technol. Forecast. Soc. Change 106, 65–73.
Ward, M., Gruppen, L., Regehr, G., 2002. Measuring self-assessment: current state of the art. Adv. Health Sci. Educ. 7 (1), 63–80.
Weiss, D.J., Shanteau, J., 2003. Empirical assessment of expertise. Hum. Factors 45 (1), 104–116.
Winkler, J., Moser, R., 2016. Biases in future-oriented Delphi studies: a cognitive perspective. Technol. Forecast. Soc. Change 105, 63–76.
Wright, G., Ayton, P., 1992. Judgmental probability forecasting in the immediate and medium term. Organ. Behav. Hum. Decis. Process. 51 (3), 344–363.
Yaniv, I., Kleinberger, E., 2000. Advice taking in decision making: egocentric discounting and reputation formation. Organ. Behav. Hum. Decis. Process. 83 (2), 260–281.
Yimam-Seid, D., Kobsa, A., 2003. Expert-finding systems for organizations: problem and domain analysis and the DEMOIR approach. J. Organ. Comput. Electr. Commer. 13 (1), 1–24.

Dr. Stefanie Mauksch is a post-doctoral researcher and lecturer at University of Leipzig, Institute of Anthropology, in Germany. She graduated from the Martin Luther University, Halle, and holds a Magister Artium degree in Social Anthropology and Media Studies. In 2012 she received her PhD from EBS University of Business and Law, Germany. Her works have been published in several books and peer-reviewed journals, including Business & Society and the Social Enterprise Journal. For her research, she has received a Best Developmental Paper Award at the British Academy of Management Conference 2012.

Dr. Heiko A. von der Gracht is Professor of Futures Studies and Foresight at Steinbeis University, School of International Business and Entrepreneurship (SIBE), in Germany. Before, he was Associate Professor at University of Erlangen–Nuremberg. He holds a PhD in Business Studies from EBS University of Business and Law, Germany. His research interests encompass corporate foresight, the Delphi and scenario techniques, foresight skills and education, and quality standards in futures research. His works have been published in several books and peer-reviewed journals, including Technological Forecasting & Social Change, Journal of Business Research, and Journal of Supply Chain Management.

Theodore J. Gordon is Co-founder and Board Member of the Millennium Project. He was also the founder and President of the Futures Group, and a founder of the Institute for the Future. He is the author of numerous books and papers dealing with futures research methodology; much of his work has been associated with the Delphi method, Real-Time Delphi, Trend Impact and Cross Impact Analysis. He was also Chief Engineer for the upper stage of the Saturn V rocket used in the Apollo program.