Don’t hate the player, hate the game: Realigning incentive structures to promote robust science and better scientific practices in marketing

Steven D. Shaw, Gideon Nave *
The Wharton School of the University of Pennsylvania, 3733 Spruce Street, Philadelphia, PA 19104, United States
ARTICLE INFO

Keywords: Replication; Mertonian norms of science; P-hacking; Publication bias; Preregistration; Registered reports; Academic culture

ABSTRACT

Marketing research draws heavily from the methods of Psychology and Economics, where questionable research practices (QRPs) are evident and replication rates are low; it is thus likely that QRPs and low replicability rates are pervasive in Marketing. Here, we review proximate and systemic issues that contribute to this state, and survey prominent solutions currently available to researchers for combating QRPs, namely preregistration, registered reports, open-science practices, and multi-verse analysis. We argue that the core of replicability issues is rooted in a misalignment of academic incentive structures rather than in the failings of individual researchers, and we make systemic recommendations for a pathway forward towards more robust and replicable marketing science.
* Corresponding author.
E-mail addresses: shawsd@wharton.upenn.edu, gnave@wharton.upenn.edu (G. Nave).

¹ Ironically, non-replicable publications tend to receive more citations than replicable ones (Serra-Garcia & Gneezy, 2021).
² For a list of replication studies in Marketing research, see: https://openmkt.org/research/replications-of-marketing-studies/.

https://doi.org/10.1016/j.jbusres.2023.114129
Received 1 January 2023; Received in revised form 22 May 2023; Accepted 22 June 2023; Available online 10 July 2023
0148-2963/© 2023 Elsevier Inc. All rights reserved.
Table 1. Summary of the Pros and Cons of Existing Solutions to the Replication Crisis in Social Science Research.
QRPs, be it intentionally or unintentionally. Second, we review the main solutions that have been proposed and are currently implemented by academic journals and researchers to improve scientific rigor in the field, alongside their advantages and disadvantages (see Table 1 for a summary). Finally, we propose a path forward to address the core systemic and cultural issues underlying the prevalence of QRPs.

2. Proximate and systemic issues undermining the replicability of Marketing research

Given the ubiquity of methods and practices from Psychology and Economics in Marketing research, and the evidence of replication issues in these fundamental disciplines, it is reasonable to believe that similar issues exist in the Marketing literature (Brodeur, Cook, & Heyes, 2022). Furthermore, discussion and awareness of issues related to replication and QRPs have been less common in the Marketing literature than in these fields.

In our view, the overarching goal of any solution, proximate or systemic, should be to promote replicability by advancing a research culture that adheres to the fundamental tenets of science, as characterized by the four Mertonian norms of science: Communalism, Universalism, Disinterestedness, and Organized skepticism. Communalism is the shared ownership of scientific goods and transparency; Universalism stands for the independence of scientific findings from the political, social, and personal implications of participants; Disinterestedness implies that scientific institutions should act for the benefit of science and knowledge rather than for personal gains; and Organized skepticism is the idea that all science should be openly subjected to scrutiny. Keeping the Mertonian norms in mind, we now discuss proximate issues in research that have likely led to low replicability, applicable both to the consumer behavior and quantitative realms (namely, QRPs). We then discuss the systemic issue of misaligned incentives in the academic publication process, as well as in the hiring and promotion process in academia.

Proximate issues: Questionable research practices. QRPs reflect situations where individual researchers deviate from the Mertonian principle of Disinterestedness. They typically involve researchers making analytical choices that spuriously increase the chances of finding evidence in support of their own hypotheses, violating the independence between research findings and the personal gains of the researchers involved. Dubbed the ‘steroids of scientific competition’ (John et al., 2012), QRPs undermine the integrity of scientific findings and put those who do comply with the Mertonian norm of Disinterestedness at a professional disadvantage. A wide variety of terms, with partly overlapping definitions, have been coined to describe various forms of QRPs, including p-hacking, HARKing (hypothesizing after results are known), undisclosed researcher degrees of freedom, and the “garden of forking paths” (Kerr, 1998; Nosek et al., 2015; Wicherts et al., 2016). Any practice that biases results in favor of one’s hypothesis would be considered a QRP, but common examples include: selectively reporting a subset of experiments, experimental conditions, or analytical models without disclosing others; creating ad-hoc variables using only a subset of items in a questionnaire to form a novel measure; continuing to collect more data until significance is achieved after viewing an (insignificant) result, or discontinuing data collection once interim results appear to support one’s hypotheses; changing participant exclusion criteria after viewing initial modeling results; and reporting exploratory results as if they were confirmatory (that is, as if they represented ex-ante hypotheses; also known as HARKing; Kerr, 1998).

Although QRPs come in various shapes and forms, most of them involve flexibly choosing between analytical approaches that could easily be justified post hoc (both to oneself and when communicating the results to others). This makes it easy for the (human) researchers involved to deceive themselves and others into thinking ‘this is fine or reasonable’ when a practice is applied to their own study (Shalvi et al., 2011), even though they may condemn the same act by other researchers. When commonplace, QRPs jeopardize the quality of science in a field, as they afford researchers the ability to (mis)represent effects as “statistically significant” (Simmons et al., 2011). Next, we argue that QRPs arise because 1) humans are biased (i.e., ‘the players’), and 2) our system of publishing and the incentive structures of the scientific career are misaligned with the values of scientific pursuit (i.e., ‘the game’).
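The potency of these data-collection QRPs is easy to demonstrate by simulation. The minimal Python sketch below (our illustration, using NumPy and SciPy; the starting sample, batch size, and cap are arbitrary choices) tests after every batch of participants and stops at the first p < .05, even though the true effect is exactly zero, in the spirit of the demonstrations popularized by Simmons et al. (2011).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2023)

def peeking_study(n_start=20, n_max=100, step=10, alpha=0.05):
    """One simulated study under a TRUE null effect (mean = 0).
    The 'researcher' runs a t-test after every batch of participants
    and stops as soon as p < alpha -- i.e., optional stopping."""
    x = list(rng.standard_normal(n_start))
    while True:
        p = stats.ttest_1samp(x, popmean=0).pvalue
        if p < alpha:
            return True        # 'significant' finding (a false positive)
        if len(x) >= n_max:
            return False       # gave up at the sample-size cap
        x.extend(rng.standard_normal(step))

# With honest fixed-n testing the false-positive rate would be ~5%.
false_positives = np.mean([peeking_study() for _ in range(5_000)])
print(f"False-positive rate with optional stopping: {false_positives:.1%}")
```

Peeking after every batch raises the false-positive rate well above the nominal level, and combining several such undisclosed degrees of freedom compounds the inflation further (Simmons et al., 2011).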
Systemic issues: the incentive structure in academia. The prestige of scientific journals is largely evaluated based on the impact of the works they publish—often proxied via metrics such as citation counts (for individual papers) and impact factors (for journals). Specifically, journals pride themselves on, and are ranked by, impact factors, calculated as the average number of citations that the journal’s articles garner in a year. Most high-impact journals receive more submissions than they are able or willing to publish, and therefore select only a small subset of submissions. To ensure that a journal’s impact factor remains high, its editors tend to prefer publishing articles that are expected to attract attention, fueling subsequent works and debate. Such articles typically contain findings that are, on the one hand, surprising or counter-intuitive and, on the other hand, supported by clear-cut, statistically significant results, as opposed to null or mixed evidence. This has led to the well-documented phenomenon of ‘publication bias’, where the published literature contains mostly “positive” effects, and null results remain hidden in the proverbial file drawer (Lane et al., 2016; Rosenthal, 1979; Sterling, 1959).
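For concreteness, the standard two-year impact factor underlying these rankings can be written out explicitly (a worked statement of the widely used Journal Citation Reports definition, added here for illustration):

```latex
\mathrm{IF}_{y} =
  \frac{\text{citations received in year } y \text{ to items published in years } y-1 \text{ and } y-2}
       {\text{number of citable items published in years } y-1 \text{ and } y-2}
```

Every citation to a recent article thus raises the journal’s standing directly, which is precisely what makes attention-grabbing findings so valuable to editors.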
While most individual researchers certainly respect scientific and academic integrity, they are nonetheless ingrained into the systems and culture created by those who preceded them (Lilienfeld, 2017). Careers in academia are made or broken based on the dogma of ‘publish or perish’. Decisions regarding the allocation of limited faculty positions, promotions, awards, grants, and pay are all tied to researcher impact, often proxied via citation counts or h-indices, which are clearly situated within the journal publication systems previously discussed.³ As a consequence of the publication criteria of the journals, researchers—who are evaluated based on their publication records—also face incentives to submit articles that include statistically significant findings and tell a novel, coherent, and conclusive story, regardless of the true nature of the phenomenon under investigation. Compounded with the fact that the training and pathway to a salary in a traditional academic career is one of the longest among professional careers (Gaff, 2010), researchers face (if nothing else) perceived pressures not only to make a name for themselves in the field and succeed professionally, but also to pay bills and put food on the table. In many cases, such conditions are not conducive to honesty and integrity, which lie at the core of scientific ethics (Steneck, 2006). Even more flagrant examples of financial incentives to publish, in direct contradiction to the Disinterestedness norm, include known ‘cash for publication’ schemes, where institutions or governments reward researchers solely for publishing in specific top journals (Quan et al., 2017), and the rise of ‘predatory journals’, which attempt to collect payment from researchers desperate to publish their works in any journal outlet (Grudniewicz et al., 2019).

³ Further, some institutions receive government research funds based on performance-based allocations. Most practices used to improve replicability focus on the quality instead of the quantity of research output, which may lead to fewer publications, and potentially less funding if performance metrics emphasize quantity; this makes the adoption of such practices a prisoner’s dilemma.

After all, scientists are humans, and humans respond to the incentives presented to them (O’Doherty, 2004). Even the most passionate scientists may soon realize that they are put at a disadvantage when operating in a system that may not value or reward their integrity and methodological rigor. Worse, such individuals, who should be appreciated for upholding the tenets of science, may become disillusioned by the process and be pushed out to non-academic jobs. Meanwhile, researchers fluent in QRPs, who followed the misaligned incentives dictated by the scientific publication system, are rewarded within this system.

We argue that the incentive structures currently in place in academic systems of scientific communication (i.e., journal outlets) are more to blame for the prevalence of QRPs in science than researchers’ moral compasses (or lack of ethical standards). The crushing weight of the publish-or-perish system counteracts and dramatically hinders the ability of researchers to succeed in discovering interesting, useful, and (crucially) replicable consumer insights. On this note, we take a top-down perspective on the systemic issues that have led to the replication crisis. As such, our primary recommendations in Section 4 (A Path Forward) focus on top-down approaches to resolving these issues, suggesting that cultural reform should be driven by those who hold power, the gatekeepers, and passed down to students and early career researchers.

3. Existing solutions

There is no single, universal solution to the cultural issues that have led to pervasive publication bias and irreplicability in science. That said, many evolving solutions are taking shape and gaining traction across fields. The solutions discussed in this section offer a variety of approaches for working towards more robust scientific practices in Marketing: some aim to solve publication bias with meta-analytical correction or by publishing null findings, others attempt to prevent researchers from p-hacking, while others still seek to hold journals accountable by submitting all publications to a statistical verification algorithm (e.g., statcheck.io; Statcheck, 2023).
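The logic of such verification is simple: a reported test statistic and its degrees of freedom already determine the p-value, so the reported p can be recomputed and cross-checked. The Python sketch below is a toy version of this consistency check (the real statcheck additionally parses APA-formatted statistics out of full manuscripts, which we omit; the tolerance is an arbitrary choice):

```python
from scipy import stats

def check_t_report(t, df, p_reported, two_sided=True, tol=0.005):
    """Recompute the p-value implied by a reported t(df) statistic and
    flag reports that are inconsistent with it -- the core consistency
    check behind tools like statcheck, minus the text parsing."""
    p = stats.t.sf(abs(t), df)   # one-sided tail probability
    if two_sided:
        p *= 2
    consistent = abs(p - p_reported) <= tol
    return consistent, p

# "t(28) = 2.20, p = .04" -> recomputed p ~= .036: consistent.
# "t(28) = 2.20, p = .01" -> recomputed p ~= .036: flagged.
print(check_t_report(2.20, 28, 0.04))
print(check_t_report(2.20, 28, 0.01))
```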
Preregistration. At the time of writing, one of the most commonly proposed and implemented approaches intended to reduce QRPs is preregistration—the practice of publicly committing to various aspects of a study’s research plan (e.g., experimental design, hypotheses, sample size, inclusion criteria) prior to conducting it (Nosek et al., 2015).⁴ Platforms such as AsPredicted.org (AsPredicted, 2023) and the Open Science Framework (OSF, 2023) are the most commonly used for preregistration in behavioral research. These platforms often allow researchers to hold the preregistration information under an embargo for a period of several years, at which point the information becomes publicly accessible (authors may choose to end the embargo should their work be published earlier). When employed properly, preregistration has two main advantages. First, it reduces undisclosed flexibility in data analysis, as it constrains researchers from deviating from an analysis plan conceived before they saw the data. Although deviations are sometimes unavoidable—as it is impossible to foresee all potential issues with a dataset before its analysis—such deviations are publicly disclosed and must be judged as justified during the peer-review process.⁵ Second, preregistration helps reduce publication bias: even if a given study does not end up being included in a published article, its registration will eventually become public. This allows independent researchers to track unpublished works and include them in independent analyses or meta-analyses of the literature (Nosek et al., 2018; Simmons et al., 2021). For researchers who have carefully considered their hypotheses, research design, data analysis plan, etc., the time and effort needed to preregister may not be cumbersome. Further, in such cases, preregistration can (in theory) facilitate data analysis by providing a predefined analysis plan and “forcing” researchers to adhere to a clearly defined plan before collecting the data or conducting the analysis. For guides on how to create a preregistration, see van ’t Veer & Giner-Sorolla (2016) or the “Create a Preregistration” section of the OSF website (Preregistration—OSF Support, 2023).

⁴ In studies using primary data, preregistration is typically done before data collection. When using secondary data, preregistration should be done before analyzing the data.
⁵ For example, see the Supplementary Information of Aydogan et al. (2021).
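To make the ingredients of a preregistration concrete, the sketch below shows the kind of fields such platforms prompt for, rendered as a Python dictionary. The field names are loosely paraphrased from the short question lists used by AsPredicted-style forms (not the platform’s exact wording), and the study content is entirely hypothetical.

```python
# A hypothetical, minimal preregistration skeleton. Field names are
# paraphrased from the kinds of questions AsPredicted-style forms ask.
preregistration = {
    "data_collected":     "No data have been collected for this study yet.",
    "hypothesis":         "Loss-framed ads yield higher purchase intent "
                          "than gain-framed ads.",
    "dependent_variable": "Purchase intent: mean of three 7-point items.",
    "conditions":         ["gain_frame", "loss_frame"],
    "analysis_plan":      "Two-sample t-test on mean purchase intent.",
    "exclusions":         "Drop participants failing either attention check.",
    "sample_size":        "N = 400 (200 per condition), fixed in advance.",
    "other":              "Any further analyses will be labeled exploratory.",
}
```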
While the merits of preregistration are valiant, it is not a panacea. In addition to increasing researchers’ workload and time commitments (Krishna, 2021), preregistration might not always live up to the promises of its proponents when put into practice (Pham & Oh, 2021a). Simply put, researchers might easily impart bias into the preregistration process by preregistering many studies and only reporting those that work out the way they intended. As such, the culture and ideas surrounding preregistration can lead to a ‘badge of honor’ phenomenon, where researchers simply use preregistration to signal the scientific rigor and virtue of their studies, even when this is not the case (Pham & Oh, 2021b). Furthermore, although preregistrations eventually become public, their discovery by other researchers (e.g., when performing a meta-analysis) is not always trivial.

In sum, preregistration may reduce QRPs, as it forces researchers to think carefully about, and commit to, a specific research plan before seeing the data. However, it does not resolve the systemic issues that led to the replication crisis in science in the first place. Crucially, by preregistering, researchers end up investing more effort in any given study, without any guarantee that this effort will pay off, as a well-designed study with a null result will likely not be granted the reward of publication in the current publishing environment. Thus, the aforementioned incentive-misalignment problem persists.

Registered reports. A registered report is a publication format that turns the conventional publication process on its head. When submitting a registered report, the authors write the paper, including the theory and methods (with detailed study design and analyses), prior to conducting the research, and submit the manuscript for publication before the results are known. Proposals that are evaluated favorably receive an ‘in principle acceptance’, indicating that, should the researchers successfully execute the stated proposal, the journal will publish the findings regardless of statistical significance. During the review process, editors and reviewers may ask for revisions (including changes in the study design and analysis); however, acceptance decisions are made before the results are known. After the authors have completed the research, they submit the full manuscript, including the results (highlighting the original analysis and clearly indicating that any additional analyses are exploratory) and discussion, for compliance review. Crucially, editors cannot reject the manuscript in the second round of review just because the results do not contain statistically significant findings, or due to post-hoc concerns about the methods (Chambers, 2013). For a comprehensive guide to registered reports, see Chambers & Tzavella (2022).

Registered reports carry a promise to reduce QRPs and publication bias, as journals commit to publishing research proposals based only on the importance of the research question at hand, the quality of the experimental design, and the rigor of the proposed methods, rather than on the statistical significance of the results. The authors are thus incentivized to carefully craft their research hypotheses and execute the best possible study to test them, with the help of feedback from the review team, and the pre-approved study plan eliminates the incentive to engage in QRPs to obtain statistical significance.

Since the first offering of registered reports as a format of scientific publishing in 2012, by the journals Cortex and Perspectives on Psychological Science, over 300 scientific journals now invite submissions in the format (Chambers & Tzavella, 2022). Early evidence indicates that, across fields, 60% of published registered reports include null results, approximately five times more than their non-registered counterparts, a difference even more pronounced for publications in Psychology (Allen & Mehler, 2019; Scheel et al., 2021). Further, as judged by a panel of independent peer reviewers, registered reports outperform non-registered reports on characteristics such as importance, novelty, creativity, innovation, methodological rigor, and overall article quality (Soderberg et al., 2021), and they receive a similar number of citations as non-registered reports (Hummer et al., 2017). Developing online research communities unaffiliated with any journal, such as Peer Community In Registered Reports (PCI RR; PCI Registered Reports, 2023), bring together researchers interested in working on registered reports, facilitate free and transparent recommendations, and may independently help the registered report publication format.

Data and materials sharing. When a process is fully transparent, observers no longer need to rely on trust to ensure that it was done honestly and correctly. Rather, anyone willing to put in the effort can conduct due diligence themselves to determine the merits of the process—in line with all of the Mertonian norms, but particularly promoting Communalism and Organized skepticism. Historically, valiant researchers spent countless hours reaching out to their peers to obtain information about studies and engage in such due diligence. Digging behind the scenes of published work is a cumbersome and often fruitless process (e.g., when the authors do not reply or share their data; Conlisk, 2011). Yet, looking back, these efforts were a great service to our collective scientific pursuits. With open science practices, or the transparent sharing of the information behind our research, efforts to verify, understand, and better our research processes become easier. Open science makes secondary analyses, code, data, and any other relevant information available to those who are interested.

Data repositories, such as ResearchBox (ResearchBox, 2023) and re3data (Re3data.org, 2023), offer relatively easy methods for researchers to store and share scientific content, such as data, code, preregistrations, and study materials related to their projects. When researchers share their work openly, other scientists can build on it and make their own work more efficient, check papers for unforced errors, attempt to replicate results or use the information for a meta-analysis, and delve into findings in as much detail as necessary. For an overview of data sharing guidelines and best practices, see Alter & Gonzalez (2018).

Replication studies. Of course, conducting and publishing replication studies can help the field of Marketing 1) understand the overall state of replicability in our field, and 2) determine which effects replicate and which do not, clarifying the nature of previously underpowered studies. Replication studies involve repeating a previously published study with the goal of assessing the validity and generalizability of its findings, often using significantly larger and more diverse participant samples, and multiple research labs to conduct the study. By replicating studies, researchers can confirm or refute previous findings and identify potential errors or biases in the original research. Some academic journals have started to incentivize and prioritize replication studies by establishing dedicated sections for replication research or offering publication bonuses for authors who conduct and publish replication studies. For example, many journals (e.g., Journal of Experimental Psychology) have published special issues with articles devoted to replication and confirmatory research. Similarly, some journals now have designated sections devoted to replication, which encourage the submission of studies that attempt to replicate or extend previous findings (e.g., Marketing Letters; Labroo et al., 2022).

Big team science can play a role in promoting replication studies by pooling resources and expertise to conduct large-scale replications. For example, the Many Labs project, led by researchers at the University of Virginia, brought together over 100 researchers from around the world to replicate a set of psychological studies across multiple laboratories. This project helped to demonstrate the replicability of some findings and identify potential issues with others (Klein et al., 2014, 2018). Doing a replication properly is a significant time investment, often requiring large financial resources and collaboration with several labs and research teams, and it is a great service to the field. For a guide on conducting a thorough and effective replication study, see Brandt et al. (2014).
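The demand for larger samples is easy to quantify with a standard power calculation (a sketch using statsmodels; the effect sizes are hypothetical). Because publication bias inflates published effect sizes, replicators often power their studies to detect a substantially smaller effect than the one originally reported:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Per-group n for a two-sample t-test at alpha = .05:
n_original = analysis.solve_power(effect_size=0.50, power=0.80, alpha=0.05)
# A replication powered at 90% for HALF the published effect size:
n_replication = analysis.solve_power(effect_size=0.25, power=0.90, alpha=0.05)
print(round(n_original), round(n_replication))  # roughly 64 vs. 338 per group
```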
Multi-verse analysis and specification curves. Multi-verse analysis increases the transparency of the data processing and analysis stages of a research process by performing “all analyses across the whole set of alternatively processed data sets corresponding to a large set of reasonable scenarios” (Steegen et al., 2016). A multiverse analysis recognizes that, when preparing and analyzing data, researchers make a wide range of decisions (e.g., participant exclusion criteria, variable transformations, variable categorization, etc.) whose outcomes are potentially justifiable, providing fertile ground for potential QRPs. Rather than giving researchers the flexibility to make these decisions, multiverse analysis forces researchers to consider all possible justifiable analyses (that is, all possible combinations of design decision outcomes) and present them, creating alternative universes of analyses of the data at hand. Presenting such a comprehensive set of analyses allows readers to evaluate the degree to which any of the authors’ conclusions depend on specific analytical choices. This way, the results presented include a ‘multi-verse’ of possible results, rather than a singular analysis that may include the inherent biases and arbitrary choices made by researchers during study design, which accumulate over the course of traditional research preparations. Specification curve analysis consists of similar multiverse data processing steps, with the addition of specific data visualizations and inferential statistical procedures, namely, a specification plot (Simonsohn et al., 2019). A how-to guide for multi-verse and similar analyses can be found in Del Giudice & Gangestad (2021) or in Specification Curve Analysis: A Practical Guide (2023), and there are R packages available for conducting multi-verse analysis (Multiverse: An R Package for Creating Multiverse Analysis, 2022).
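A minimal sketch of this logic in Python (on simulated data, using pandas and statsmodels; the decision options and variable names are hypothetical): enumerate every combination of defensible processing choices, refit the same focal model in each resulting ‘universe’, and report the whole distribution of estimates rather than a single hand-picked one.

```python
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),           # hypothetical condition dummy
    "age":     rng.integers(18, 70, n),
    "rt":      rng.exponential(1.0, n),         # response time, for exclusions
})
df["y"] = 0.2 * df.treated + rng.normal(size=n) # simulated outcome

# Two defensible decision dimensions: whom to exclude, what to adjust for.
exclusion_rules = {
    "keep_all":     lambda d: d,
    "drop_fast_rt": lambda d: d[d.rt > 0.1],
    "drop_slow_rt": lambda d: d[d.rt < 3.0],
}
covariate_sets = {"none": "", "age": " + age", "age_rt": " + age + rt"}

# One OLS fit per universe; the focal quantity is the 'treated' coefficient.
multiverse = []
for (ex_name, rule), (cov_name, terms) in itertools.product(
        exclusion_rules.items(), covariate_sets.items()):
    fit = smf.ols("y ~ treated" + terms, data=rule(df)).fit()
    multiverse.append((ex_name, cov_name,
                       fit.params["treated"], fit.pvalues["treated"]))

# Sorting the estimates by size is the backbone of a specification plot.
for spec in sorted(multiverse, key=lambda s: s[2]):
    print(spec)
```

A full specification-curve analysis would add the inferential step of Simonsohn et al. (2019) on top of this descriptive enumeration.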
4. A path forward

Thus far, we have outlined key issues known to exist in the current body of published social science literature, which are expected to also be found in the Marketing literature, and surveyed the most prominent tools currently in use to address them. We now discuss a path forward and reflect on how we see our field resolving these issues. Broadly put, we recommend shifting the scope of solutions from proximate methods that are implemented by individual researchers to systemic solutions that realign the incentive structures of journals and researchers with the tenets of the scientific method and promote the Mertonian norms of science among all.

Cultural reform from the top down. To facilitate the creation of a robust base of knowledge in Marketing, we must succeed in creating systems that align the incentives of individual researchers with the values of science. Such reforms must come from within our field and be carried out by those who hold power over the processes that dictate our systems of work: department chairs, faculty involved in making hiring and promotion decisions, society presidents, and journal editors. Put simply, despite a clear yearning from early career researchers for reform, the personal risks and potential career implications are, unfortunately, too high for them to be expected to break the norms of a long-established system. Whereas many of society’s most profound cultural shifts come from grassroots, bottom-up movements, we argue that it is unreasonable to expect this for cultural reform in Marketing academia, where power is concentrated among a small number of individuals, the so-called ‘gatekeepers’ of the academic profession.

Leadership, take a stand. If scientific integrity and the pursuit of knowledge are common goals of our field, it is essential to recognize that QRPs are generally the result not of malice or intent, but of human actors operating within a professional system. Leaders must take a stand and pave the way forward by creating policies that foster rigorous science, adhere to the Mertonian norms, and disincentivize QRPs. Essentially, those in power need to adjust the rules of the game to ensure that we incentivize better scientific practices, and that there is no need for ‘hating the game’. Below, our recommendations address journal editors and faculty as the key stakeholders and leaders who hold the power to implement systemic change in our field.

• Marketing journals should align with their counterparts in the basic disciplines of social science: add registered reports to the list of eligible publication types and encourage researchers to submit manuscripts in a registered report format, following the guidelines set by Chambers & Tzavella (2022).
• Journals should take responsibility for the works that they publish; if a journal publishes an article, it must encourage further work evaluating its robustness and replicability, in line with the Mertonian norm of Organized skepticism.
• Journals should not shy away from publishing articles that contain null or mixed results, and should focus on the quality of the research question and the appropriateness of the methods aimed at answering it. (Note: null results should be supported by equivalence tests or Bayes factors, as the absence of evidence is not evidence of absence; see the sketch following this list.)
• Decrease the emphasis on, and value of, “statistically significant” results (i.e., p-values lower than an arbitrary threshold; McShane et al., 2019).
• At a minimum, articles with studies that are not preregistered or in the registered report format should contain a multi-verse or specification curve analysis.
• Implement, or phase in an increasing, Transparency and Openness Promotion (TOP) factor based on the TOP guidelines for data citation, data transparency, analysis code transparency, materials transparency, design and analysis reporting guidelines, study preregistration, analysis plan preregistration, and replication (TOP Factor—Transparency and Openness Promotion, 2023).
• Listen to and support authors and reviewers who encourage preregistering their studies or registered report submissions.
• Use the OSF “Standard Reviewer Statement for Disclosure of Sample, Conditions, Measures, and Exclusions” to request better data practices and information from authors (OSF | Standard Reviewer Statement for Disclosure of Sample, Conditions, Measures, and Exclusions, 2023).
• Reviewers for a paper should be restricted in the number of citations that they can suggest from their own work (we suggest a maximum of 3 citations, but a zero-tolerance policy could also be justified).
• Journal editors should be restricted from suggesting citations to works published in the journal they are editing and to their own work.
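To illustrate the equivalence-testing route flagged in the list above, the sketch below implements the two one-sided tests (TOST) procedure in Python with SciPy: a null effect is supported only if the estimate is significantly above the lower equivalence bound and significantly below the upper one. The ±0.3 bounds are hypothetical stand-ins for the smallest effect size of interest, which must be justified substantively in practice.

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high, alpha=0.05):
    """Two one-sided tests (TOST): conclude the mean of x lies within
    the equivalence interval (low, high) only if BOTH one-sided tests
    reject -- supporting a claimed null effect, rather than merely
    failing to reject zero."""
    _, p_lower = stats.ttest_1samp(x, popmean=low, alternative="greater")
    _, p_upper = stats.ttest_1samp(x, popmean=high, alternative="less")
    p_tost = max(p_lower, p_upper)
    return p_tost, p_tost < alpha

# Hypothetical effect estimates with a mean near zero.
x = np.random.default_rng(42).normal(0.02, 0.5, size=300)
p, equivalent = tost_one_sample(x, low=-0.3, high=0.3)
print(f"TOST p = {p:.4f}; equivalence supported: {equivalent}")
```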
For faculty, we recommend:

• Prioritize and appreciate, in hiring and promotion decisions, candidates who have engaged in significant efforts to increase the replicability of their own research. For guidelines on best practices, see Gernsbacher (2018) or Moher et al. (2018).
• Offer internal rewards that incentivize researchers and colleagues to preregister studies, submit replication reports, and engage in open science practices. For examples, see the ‘Recognition and Rewards Vision’ section of Utrecht University’s Open Science Guide (Utrecht University Recognition and Rewards Vision, 2023).
• Encourage graduate students to explore submitting registered reports early in their research careers.
• Include, discuss, and value open science practices (e.g., data and material sharing) in tenure requirements.
• Promote sustainable research jobs with better work-life balance. For example, accommodate hybrid work-from-home schedules, family planning, and geographic constraints of researchers.
• Make attempts to consider the quality of research endeavors over quantity (i.e., total citation counts). For example, when evaluating faculty, ask for 3–5 exemplary publications to be considered, rather than an entire body of work.

The way that we conduct science is changing for the better, but there is much work yet to be done. In this paper, we have proposed policy changes that could shift the academic publishing culture and accelerate the pathway forward towards reducing QRPs and rebuilding trust in our body of scientific findings. We see these changes as necessary, since trust in the processes and systems that drive how we conduct scientific endeavors is essential to building a culture of academic integrity and to attracting and retaining top researchers within the field of Marketing.

CRediT authorship contribution statement

Steven D. Shaw: Conceptualization, Writing – original draft, Writing – review & editing. Gideon Nave: Conceptualization, Writing – review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

We thank the editorial team and reviewers for their time and thoughtful comments; this work was improved as a result of your efforts.

References

Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career and beyond. PLOS Biology, 17(5), e3000246.
Alter, G., & Gonzalez, R. (2018). Responsible Practices for Data Sharing. The American Psychologist, 73(2), 146. https://doi.org/10.1037/AMP0000258
AsPredicted. (2023). Retrieved December 21, 2022, from https://aspredicted.org/
Aydogan, G., Daviet, R., Karlsson Linnér, R., Hare, T. A., Kable, J. W., Kranzler, H. R., … Nave, G. (2021). Genetic underpinnings of risky behaviour relate to altered neuroanatomy. Nature Human Behaviour, 5(6), 787–794. https://doi.org/10.1038/s41562-020-01027-y
Berman, R., Pekelis, L., Scott, A., & Van den Bulte, C. (2018). P-Hacking in A/B Testing. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.3204791
Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F. J., Geller, J., Giner-Sorolla, R., Grange, J. A., Perugini, M., Spies, J. R., & van ’t Veer, A. (2014). The Replication Recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50(1), 217–224. https://doi.org/10.1016/J.JESP.2013.10.005
Brodeur, A., Cook, N., & Heyes, A. (2022). We Need to Talk about Mechanical Turk: What 22,989 Hypothesis Tests Tell us about p-Hacking and Publication Bias in Online Experiments. https://www.econstor.eu/handle/10419/266266
Button, K. S., Ioannidis, J. P. A., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S. J., et al. (2013). Power failure: Why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. https://doi.org/10.1038/NRN3475
Camerer, C. F., Dreber, A., Forsell, E., Ho, T. H., Huber, J., Johannesson, M., et al. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433–1436.
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T. H., Huber, J., Johannesson, M., … Wu, H. (2018). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour, 2(9), 637–644. https://doi.org/10.1038/s41562-018-0399-z
Chambers, C. D. (2013). Registered Reports: A new publishing initiative at Cortex. Cortex, 49(3), 609–610. https://doi.org/10.1016/J.CORTEX.2012.12.016
Chambers, C. D., & Tzavella, L. (2022). The past, present and future of Registered Reports. Nature Human Behaviour, 6(1), 29–42. https://doi.org/10.1038/S41562-021-01193-7
Conlisk, J. (2011). Professor Zak’s empirical studies on trust and oxytocin. Journal of Economic Behavior & Organization, 78(1–2), 160–166. https://doi.org/10.1016/J.JEBO.2011.01.002
de Vrieze, J. (2021). Large survey finds questionable research practices are common. Science, 373(6552), 265. https://doi.org/10.1126/SCIENCE.373.6552.265
Del Giudice, M., & Gangestad, S. W. (2021). A Traveler’s Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions. Advances in Methods and Practices in Psychological Science, 4(1). https://doi.org/10.1177/2515245920954925
Ebersole, C. R., Atherton, O. E., Belanger, A. L., Skulborstad, H. M., Allen, J. M., Banks, J. B., et al. (2016). Many Labs 3: Evaluating participant pool quality across the academic semester via replication. Journal of Experimental Social Psychology, 67, 68–82. https://doi.org/10.1016/J.JESP.2015.10.012
Gaff, J. G. (2010). Preparing Future Faculty and Doctoral Education. Change: The Magazine of Higher Learning, 34(6), 63–66. https://doi.org/10.1080/00091380209605571
Gernsbacher, M. A. (2018). Rewarding Research Transparency. Trends in Cognitive Sciences, 22(11), 953–956. https://doi.org/10.1016/J.TICS.2018.07.002
Grudniewicz, A., Moher, D., Cobey, K. D., Bryson, G. L., Cukier, S., Allen, K., Ardern, C., Balcom, L., Barros, T., Berger, M., Ciro, J. B., Cugusi, L., Donaldson, M. R., Egger, M., Graham, I. D., Hodgkinson, M., Khan, K. M., Mabizela, M., Manca, A., … Lalu, M. M. (2019). Predatory journals: No definition, no defence. Nature, 576(7786), 210–212. https://doi.org/10.1038/d41586-019-03759-y
Hagger, M. S., Chatzisarantis, N. L. D., Alberts, H., Anggono, C. O., Batailler, C., Birt, A. R., et al. (2016). A Multilab Preregistered Replication of the Ego-Depletion Effect. Perspectives on Psychological Science, 11(4), 546–573. https://doi.org/10.1177/1745691616652873
Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The Extent and Consequences of P-Hacking in Science. PLOS Biology, 13(3), e1002106. https://doi.org/10.1371/journal.pbio.1002106
Hummer, L., Thorn, F. S., Nosek, B. A., & Errington, T. M. (2017). Evaluating Registered Reports: A Naturalistic Comparative Study of Article Impact. https://doi.org/10.31219/OSF.IO/5Y8W7
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLOS Medicine, 2(8), e124.
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217. https://doi.org/10.1207/S15327957PSPR0203_4
Klein, R. A., Cook, C. L., Ebersole, C. R., Vitiello, C., Nosek, B. A., Hilgard, J., … Ratliff, K. A. (2022). Many Labs 4: Failure to Replicate Mortality Salience Effect With and Without Original Author Involvement. Collabra: Psychology, 8(1). https://doi.org/10.1525/COLLABRA.35271
Klein, R. A., Ratliff, K. A., Vianello, M., Adams, R. B., Bahník, Š., Bernstein, M. J., et al. (2014). Investigating variation in replicability: A “many labs” replication project. Social Psychology, 45(3), 142. https://doi.org/10.1027/1864-9335/A000178
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Alper, S., … Nosek, B. A. (2018). Many Labs 2: Investigating Variation in Replicability Across Samples and Settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490. https://doi.org/10.1177/2515245918810225
Krishna, A. (2021). The Need for Synergy in Academic Policies: An Introduction to the Dialogue on Pre-registration. Journal of Consumer Psychology, 31(1), 146–150. https://doi.org/10.1002/JCPY.1211
Labroo, A. A., Mizik, N., Winer, R. S., Bradlow, E. T., Huber, J., Lehmann, D., Lynch, J. G., & Lehmann, D. R. (2022). Marketing Letters encourages submissions to Replication Corner. Marketing Letters, 33(4), 543. https://doi.org/10.1007/S11002-022-09653-4
Lane, A., Luminet, O., Nave, G., & Mikolajczak, M. (2016). Is there a Publication Bias in Behavioural Intranasal Oxytocin Research on Humans? Opening the File Drawer of One Laboratory. Journal of Neuroendocrinology, 28(4). https://doi.org/10.1111/JNE.12384
Lilienfeld, S. O. (2017). Psychology’s Replication Crisis and the Grant Culture: Righting the Ship. Perspectives on Psychological Science, 12(4), 660–664. https://doi.org/10.1177/1745691616687745
McShane, B. B., Gal, D., Gelman, A., Robert, C., & Tackett, J. L. (2019). Abandon Statistical Significance. The American Statistician, 73(sup1), 235–245. https://doi.org/10.1080/00031305.2018.1527253
Moher, D., Naudet, F., Cristea, I. A., Miedema, F., Ioannidis, J. P. A., & Goodman, S. N. (2018). Assessing scientists for hiring, promotion, and tenure. PLOS Biology, 16(3), e2004089.
Multiverse: An R package for creating multiverse analysis. (n.d.). Retrieved December 27, 2022, from https://cran.r-project.org/web/packages/multiverse/readme/README.html
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., et al. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/SCIENCE.AAB2374
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/PNAS.1708274114
Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., Fidler, F., Hilgard, J., Kline Struhl, M., Nuijten, M. B., Rohrer, J. M., Romero, F., Scheel, A. M., Scherer, L. D., Schönbrodt, F. D., & Vazire, S. (2022). Replicability, Robustness, and Reproducibility in Psychological Science. Annual Review of Psychology, 73, 719–748. https://doi.org/10.1146/ANNUREV-PSYCH-020821-114157
Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226. https://doi.org/10.3758/S13428-015-0664-2
O’Doherty, J. P. (2004). Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14(6), 769–776. https://doi.org/10.1016/J.CONB.2004.10.016
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). https://doi.org/10.1126/SCIENCE.AAC4716
OSF. (2023). Retrieved December 21, 2022, from https://osf.io/
OSF | Standard Reviewer Statement for Disclosure of Sample, Conditions, Measures, and Exclusions. (2023). Retrieved April 13, 2023, from https://osf.io/hadz3/
Pham, M. T., & Oh, T. T. (2021a). On Not Confusing the Tree of Trustworthy Statistics with the Greater Forest of Good Science: A Comment on Simmons et al.’s Perspective on Pre-registration. Journal of Consumer Psychology, 31(1), 181–185. https://doi.org/10.1002/JCPY.1213
Pham, M. T., & Oh, T. T. (2021b). Preregistration Is Neither Sufficient nor Necessary for Good Science. Journal of Consumer Psychology, 31(1), 163–176. https://doi.org/10.1002/JCPY.1209
Preregistration—OSF Support. (2023). Retrieved April 14, 2023, from https://help.osf.io/article/145-preregistration#creating-a-preregistration-on-osf
Quan, W., Chen, B., & Shu, F. (2017). Publish or impoverish: An investigation of the monetary reward system of science in China (1999–2016). Aslib Journal of Information Management, 69(5), 486–502. https://doi.org/10.1108/AJIM-01-2017-0014
Ranehill, E., Dreber, A., Johannesson, M., Leiberg, S., Sul, S., & Weber, R. A. (2015). Assessing the Robustness of Power Posing: No Effect on Hormones and Risk Tolerance in a Large Sample of Men and Women. Psychological Science, 26(5), 653–656. https://doi.org/10.1177/0956797614553946
PCI Registered Reports. (2023). Retrieved April 11, 2023, from https://rr.peercommunityin.org/
Re3data.org. (2023). Retrieved April 11, 2023, from https://www.re3data.org/
ResearchBox. (2023). Retrieved April 11, 2023, from https://researchbox.org/
Ritchie, S. (2020). Science fictions: Exposing fraud, bias, negligence and hype in science.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638–641. https://doi.org/10.1037/0033-2909.86.3.638
Scheel, A. M., Schijen, M. R. M. J., & Lakens, D. (2021). An Excess of Positive Results: Comparing the Standard Psychology Literature With Registered Reports. Advances in Methods and Practices in Psychological Science, 4(2). https://doi.org/10.1177/25152459211007467
Serra-Garcia, M., & Gneezy, U. (2021). Nonreplicable publications are cited more than replicable ones. Science Advances, 7(21). https://doi.org/10.1126/SCIADV.ABD1705
Shalvi, S., Dana, J., Handgraaf, M. J. J., & De Dreu, C. K. W. (2011). Justified ethicality: Observing desired counterfactuals modifies ethical perceptions and behavior. Organizational Behavior and Human Decision Processes, 115(2), 181–190. https://doi.org/10.1016/j.obhdp.2011.02.001
Shrout, P. E., & Rodgers, J. L. (2018). Psychology, Science, and Knowledge Construction: Broadening Perspectives from the Replication Crisis. Annual Review of Psychology, 69, 487–510. https://doi.org/10.1146/ANNUREV-PSYCH-122216-011845
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. https://doi.org/10.1177/0956797611417632
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2021). Pre-registration is a Game Changer. But, Like Random Assignment, it is Neither Necessary Nor Sufficient for Credible Science. Journal of Consumer Psychology, 31(1), 177–180. https://doi.org/10.1002/JCPY.1207
Simonsohn, U., Simmons, J. P., & Nelson, L. D. (2019). Specification Curve: Descriptive and Inferential Statistics on All Reasonable Specifications. SSRN Electronic Journal. https://doi.org/10.2139/SSRN.2694998
Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J., Thorn, F. S., Vazire, S., et al. (2021). Initial evidence of research quality of registered reports compared with the standard publishing model. Nature Human Behaviour, 5(8), 990–997. https://doi.org/10.1038/S41562-021-01142-4
Specification Curve Analysis: A practical guide. (2022). Retrieved April 14, 2023, from https://dcosme.github.io/specification-curves/SCA_tutorial_inferential_presentation#1
Statcheck. (2023). Retrieved April 11, 2023, from https://michelenuijten.shinyapps.io/statcheck-web/
Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing Transparency Through a Multiverse Analysis. Perspectives on Psychological Science, 11(5), 702–712. https://doi.org/10.1177/1745691616658637
Steneck, N. H. (2006). Fostering integrity in research: Definitions, current knowledge, and future directions. Science and Engineering Ethics, 12(1), 53–74. https://doi.org/10.1007/PL00022268
Sterling, T. D. (1959). Publication Decisions and Their Possible Effects on Inferences Drawn from Tests of Significance—Or Vice Versa. Journal of the American Statistical Association, 54(285), 30. https://doi.org/10.2307/2282137
TOP Factor—Transparency and Openness Promotion. (2023). Retrieved April 14, 2023, from https://topfactor.org/summary
Utrecht University Recognition and Rewards Vision. (2023). Retrieved April 11, 2023, from https://www.uu.nl/sites/default/files/UU-Recognition-and-Rewards-Vision.pdf
van ’t Veer, A. E., & Giner-Sorolla, R. (2016). Pre-registration in social psychology—A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2–12. https://doi.org/10.1016/J.JESP.2016.03.004
Watts, T. W., Duncan, G. J., & Quan, H. (2018). Revisiting the Marshmallow Test: A Conceptual Replication Investigating Links Between Early Delay of Gratification and Later Outcomes. Psychological Science, 29(7), 1159–1177. https://doi.org/10.1177/0956797618761661
Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid P-hacking. Frontiers in Psychology, 7, 1832. https://doi.org/10.3389/FPSYG.2016.01832
Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of Research Misconduct and Questionable Research Practices: A Systematic Review and Meta-Analysis. Science and Engineering Ethics, 27(4), 41. https://doi.org/10.1007/S11948-021-00314-9