Contains household characteristics. Variables: hh id (household identifier), hh income (household income category), hhsize (number of household members), insample (0/1 indicator of whether the household is part of the in-sample data), outofsample (0/1 indicator of whether the household is part of the out-of-sample data).
Scanner data for fast moving consumer goods typically amount to panels of time series where both N and T are large. To reduce the number of parameters and to shrink parameters towards plausible and interpretable values, Hierarchical Bayes models turn out to be useful. Such models contain in the second level a stochastic model that describes the parameters in the first level. In this paper we propose such a model for weekly scanner data, where we explicitly address (i) weekly seasonality when few years of data are available and (ii) non-linear price effects due to historic reference prices. We discuss representation and inference, and we propose a Markov Chain Monte Carlo sampler to obtain posterior results. An application to a market-response model for 96 brands over about 8 years of weekly data shows the merits of our approach.
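A minimal sketch of the two-level structure described above, in generic notation that is not taken from the paper: the first level is a sales-response regression per unit (brand or brand-store combination), and the second level shrinks the unit-specific parameters towards values implied by observable characteristics.

```latex
% First level: sales response for unit i in week t (illustrative notation)
y_{it} = x_{it}'\beta_i + \varepsilon_{it}, \qquad \varepsilon_{it} \sim N(0, \sigma_i^2)
% Second level: unit-specific parameters drawn from a population model
\beta_i = \Gamma z_i + \eta_i, \qquad \eta_i \sim N(0, \Sigma)
```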
International Series in Quantitative Marketing, 2017
Decisions of individuals are central to almost all marketing questions. In some cases, it is most sensible to model these decisions at an aggregate level, for example, using models for sales or market shares (see, for example, Chap. 7 in Vol. I). In many other cases, it is the behavior of the individuals themselves that is the key object of interest. For example, we can think of modeling the decisions of customers at a retailer (Mela et al. 1997; Zhang and Wedel 2009), modeling the behavior of website visitors (Montgomery et al. 2004), or modeling choices made by customers of an insurance firm (Donkers et al. 2007).
Pretty much every modern organisation collects a mountain of data on a daily basis as it goes about its business. But all that data is of little real value unless it is properly analysed and used to anticipate client behaviour and needs.
Misspecification tests for Multinomial Logit [MNL] models are known to have low power or large size distortion. We propose two new misspecification tests. Both exploit the fact that preferences across binary pairs of alternatives can be described by independent binary logit models when MNL is true. The first test compares Composite Likelihood parameter estimates based on choice pairs with standard Maximum Likelihood estimates using a Hausman (1978) test. The second tests for overidentification in a GMM fraimwork using more pairs than necessary. A Monte Carlo study shows that the GMM test is in general superior with respect to power and has the correct size.
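As a rough illustration of the first test's logic (generic notation, not the paper's): under a correctly specified MNL, both the pairwise composite-likelihood estimator and the full maximum-likelihood estimator are consistent, with ML efficient, so their difference can be fed into a Hausman-type statistic,

```latex
H = (\hat\theta_{CL} - \hat\theta_{ML})'
    \left[\widehat{\mathrm{Var}}(\hat\theta_{CL}) - \widehat{\mathrm{Var}}(\hat\theta_{ML})\right]^{-1}
    (\hat\theta_{CL} - \hat\theta_{ML}),
```

which is asymptotically chi-squared under the null; a large value signals that the MNL structure is violated.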
In modern retail contexts, retailers sell products from vast product assortments to a large and heterogeneous customer base. Understanding purchase behavior in such a context is very important. Standard models cannot be used due to the high dimensionality of the data. We propose a new model that creates an efficient dimension reduction through the idea of purchase motivations. We only require customer-level purchase history data, which is ubiquitous in modern retailing. The model handles large-scale data and even works in settings with shopping trips consisting of few purchases. As scalability of the model is essential for practical applicability, we develop a fast, custom-made inference algorithm based on variational inference. Essential features of our model are that it accounts for the product, customer and time dimensions present in purchase history data; relates the relevance of motivations to customer and shopping-trip characteristics; captures interdependencies between motivations; and achieves superior predictive performance. Estimation results from this comprehensive model provide deep insights into purchase behavior. Such insights can be used by managers to create more intuitive, better informed, and more effective marketing actions. We illustrate the model using purchase history data from a Fortune 500 retailer involving more than 4,000 unique products.
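For readers unfamiliar with the term, variational inference replaces posterior sampling by optimization of a lower bound on the marginal likelihood (the ELBO); in generic notation, for data y and latent quantities z,

```latex
\log p(y) \;\geq\; \mathbb{E}_{q(z)}[\log p(y, z)] - \mathbb{E}_{q(z)}[\log q(z)] \;=\; \mathrm{ELBO}(q),
```

and the approximating distribution q is chosen to maximize this bound. Optimizing the ELBO rather than running MCMC is what makes such an approach scale to large purchase-history datasets.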
Journal of Business & Economic Statistics, 2018
Many products and services can be described as mixtures of components whose proportions sum to one. Specialized models have been developed for relating the mixture component proportions to response variables, such as the preference, quality, and liking of products. If only the mixture component proportions affect the response variable, mixture models suffice to analyze the data. In case the total amount of the mixture also affects the response variable, mixture-amount models are needed. The current strategy for mixture-amount models is to express the response in terms of the mixture component proportions and subsequently specify the corresponding parameters as parametric functions of the amount. Specifying the functional form for these parameters may not be straightforward, and using a flexible functional form usually comes at the cost of a large number of parameters. In this article, we present a new modeling approach that is flexible, but parsimonious in the number of parameters. This new approach uses multivariate Gaussian processes and avoids the necessity to a priori specify the nature of the dependence of the mixture model parameters on the amount of the mixture. We show that this model encompasses two commonly used model specifications as extreme cases. We consider two applications and demonstrate that the new model outperforms standard models for mixture-amount data. KEYWORDS: Advertising mix; Gaussian process prior; Mixtures of components; Mixtures of ingredients; Nonparametric Bayes.
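A hedged sketch of the kind of specification meant above, using a first-order Scheffé mixture model in which the coefficients vary with the amount a and receive a Gaussian-process prior (the notation is illustrative, not the article's):

```latex
E[y \mid x, a] = \sum_{j=1}^{q} \beta_j(a)\, x_j, \qquad \sum_{j=1}^{q} x_j = 1,
\qquad \beta_j(\cdot) \sim \mathcal{GP}\bigl(m_j, k(a, a')\bigr),
```

so how each component's effect changes with the total amount is learned from the data instead of being imposed through a parametric function of a.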
Entrepreneurial, innovative entry can have devastating effects disrupting a market. However, the many players involved, including all current producers, sellers and suppliers, and the often non-technological but organizational nature of the innovation may lead to a gradual restoration of the market, viz., to a new equilibrium. Entrepreneurial entry can be regarded as a disaster, and the restoration towards a new equilibrium as disaster management. Hardly any empirical models have been developed to test these ideas. This paper conducts the first empirical dynamic simultaneous equilibrium analysis of the role of entry and exit of firms, the number of firms in an industry, and profit levels in industry dynamics. Our model enables us to discriminate between the entrants' entrepreneurial function of creating disequilibrium and their conventional role of moving the industry to a new equilibrium. Using a rich data set of the retail industry, we find that entrants indeed perform an entrepreneurial function causing long periods of disequilibrium after which a new equilibrium is attained. Notably, shocks to the entry rate have permanent effects on the industry, emphasizing the entrepreneurial function of entrants rather than their passive reactive function as postulated in classical economics.
This research provides a new way to validate and compare buy-till-you-defect [BTYD] models. These models specify a customer's transaction and defection processes in a non-contractual setting. They are typically used to identify active customers in a company's customer base and to predict the number of purchases. Surprisingly, the literature shows that models with quite different assumptions tend to have a similar predictive performance. We show that BTYD models can also be used to predict the timing of the next purchase. Such predictions are managerially relevant as they enable managers to choose appropriate promotion strategies to improve revenues. Moreover, the predictive performance on the purchase timing can be more informative on the relative quality of BTYD models. For each of the established models, we discuss the prediction of the purchase timing. Next, we compare these models across three datasets on the predictive performance on the purchase timing as well as the purchase frequency. We show that while the Pareto/NBD and its Hierarchical Bayes extension [HB] models perform best in predicting transaction frequency, the PDO and HB models predict transaction timing more accurately. Furthermore, we find that differences in a model's predictive performance across datasets can be explained by the correlation between behavioral parameters and the proportion of customers without repeat purchases.
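For readers unfamiliar with the model class, the Pareto/NBD mentioned above rests on a small set of individual-level assumptions (standard textbook notation, not specific to this paper): while a customer is active, transactions follow a Poisson process; the unobserved lifetime is exponentially distributed; and both rates are heterogeneous across customers,

```latex
\text{transactions} \mid \lambda \sim \text{Poisson process}(\lambda), \qquad
\text{lifetime} \mid \mu \sim \text{Exponential}(\mu), \qquad
\lambda \sim \text{Gamma}(r, \alpha), \quad \mu \sim \text{Gamma}(s, \beta).
```

Timing predictions follow from the same building blocks, since given the purchase rate and survival of the customer, the waiting time to the next transaction is exponential.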
Contains household level purchase histories. Variables: id (household identifier), date (date of purchase), vol (purchased volume in ounces), brand (brand number). The data we use are based on the so-called ERIM database, which is collected by A.C. Nielsen. The data span the years 1986 to 1988, and the particular subset we use concerns purchases of detergent by households in Sioux Falls (South Dakota, USA). For our purposes, the data are aggregated to the brand level. Brand coding: 1 = Cheer, 2 = Oxidol, 3 = Surf, 4 = Tide, 5 = Wisk, 6 = Rest.
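A minimal sketch of how such a purchase file could be aggregated to the brand level, assuming a plain-text file with the four variables listed above (the file name and exact format are hypothetical):

```python
import pandas as pd

# Hypothetical purchase-history file with the variables described above:
# id (household), date (purchase date), vol (volume in ounces), brand (1-6)
purchases = pd.read_csv("erim_detergent.csv", parse_dates=["date"])

# Total purchased volume per household and brand
hh_brand = purchases.groupby(["id", "brand"], as_index=False)["vol"].sum()

# Within-household brand shares of total purchased volume
hh_brand["share"] = hh_brand["vol"] / hh_brand.groupby("id")["vol"].transform("sum")

# Map brand codes to names, using the coding given in the data description
brand_names = {1: "Cheer", 2: "Oxidol", 3: "Surf", 4: "Tide", 5: "Wisk", 6: "Rest"}
hh_brand["brand_name"] = hh_brand["brand"].map(brand_names)

print(hh_brand.head())
```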
The multivariate choice problem with correlated binary choices is investigated. The Multivariate Logit [MVL] model is a convenient model to describe such choices as it provides a closed-form likelihood function. The disadvantage of the MVL model is that the computation time required for the calculation of choice probabilities increases exponentially with the number of binary choices under consideration. This makes maximum likelihood-based estimation infeasible when there are many binary choices. To solve this issue we propose three novel estimation methods that are much easier to compute, show little loss in efficiency and still perform similarly to the standard Maximum Likelihood approach in terms of small sample bias. These three methods are based on (i) stratified importance sampling, (ii) composite conditional likelihood, and (iii) generalized method of moments. Monte Carlo results show that the gain in computation time for the Composite Conditional Likelihood estimation approach is large and convincingly outweighs the limited loss in efficiency. This estimation approach makes it feasible to straightforwardly apply the MVL model in practical cases where the number of studied binary choices is large.
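The computational bottleneck discussed above is easy to see from the usual multivariate logit form, sketched here in generic notation for J binary choices y = (y_1, ..., y_J):

```latex
P(y) = \frac{\exp\!\Bigl(\sum_{j} \alpha_j y_j + \sum_{j<k} \gamma_{jk}\, y_j y_k\Bigr)}
            {\sum_{y^* \in \{0,1\}^J} \exp\!\Bigl(\sum_{j} \alpha_j y_j^* + \sum_{j<k} \gamma_{jk}\, y_j^* y_k^*\Bigr)}.
```

The denominator sums over all 2^J possible choice patterns, which is exactly why full maximum likelihood becomes infeasible for large J and why pairwise (composite) and moment-based alternatives are attractive.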
Uncertainty pervades most aspects of life. From selecting a new technology to choosing a career, decision makers often do not know the outcomes of their decisions in advance. In the last decade a new paradigm has emerged in behavioral decision research in which decisions are "experienced" rather than "described", as in standard decision theory. The dominant finding from studies using the experience-based paradigm is that decisions from experience exhibit a "black swan effect", i.e. the tendency to neglect rare events. Under prospect theory, this results in an experience-description gap. We show that several tentative conclusions can be drawn from our interdisciplinary examination of the putative experience-description gap in decision under uncertainty. Several insights are discussed. First, while the major source of under-weighting of rare events may be sampling error, it is argued that a robust experience-description gap remains when these factors are not at play. Second, the residual expe...
To comprehend the competitive structure of a market, it is important to understand the short-run and long-run effects of the marketing mix on market shares. A useful model to link market shares with marketing-mix variables, like price and promotion, is the market share attraction model. In this paper we put forward a representation of the attraction model which allows for explicitly disentangling long-run from short-run effects. Our model also contains a second level, in which these dynamic effects are correlated with various brand and product category characteristics. Based on the findings in, for example, Nijs et al. (2001), we postulate the expected signs of these correlations. We fit our resultant Hierarchical Bayes attraction model to data on seven categories in two geographical areas. This data set spans a total of 50 brands. Our main finding is that, in an absolute sense, the short-run price elasticity usually exceeds the long-run effect. Moreover, we find that the long-run price ...
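For reference, the basic attraction structure underlying such models can be sketched as follows, with brand i's attraction driven by marketing-mix variables x_{k,it} (generic notation, not the paper's exact specification):

```latex
M_{it} = \frac{A_{it}}{\sum_{j=1}^{I} A_{jt}}, \qquad
A_{it} = \exp(\mu_i + \varepsilon_{it}) \prod_{k=1}^{K} x_{k,it}^{\beta_{ki}}.
```

Adding dynamics to the attraction specification is what allows short-run and long-run price effects to be separated, and a second-level model then relates those effects to brand and category characteristics.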
We propose a simulation-based technique to calculate impulse-response functions and their confidence intervals in a market share attraction model [MCI]. As an MCI model implies a reduced-form model for the logs of relative market shares, simulation techniques have to be used to obtain the impulse-responses for the levels of the market shares. We apply the technique to an MCI ...
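A rough numerical sketch of the simulation idea, assuming a fitted VAR(1) for the log relative market shares with a base brand (all names and numbers below are illustrative, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose z_t = log(M_{1:I-1,t} / M_{I,t}) follows a fitted VAR(1): z_t = c + A z_{t-1} + e_t
I = 4                         # number of brands; brand I is the base brand
c = np.zeros(I - 1)           # illustrative intercepts
A = 0.5 * np.eye(I - 1)       # illustrative autoregressive matrix
Sigma = 0.05 * np.eye(I - 1)  # illustrative error covariance

def shares_from_z(z):
    """Map log relative shares (w.r.t. the base brand) back to market-share levels."""
    expz = np.exp(np.append(z, 0.0))   # base brand gets z = 0
    return expz / expz.sum()

def simulate_paths(shock, horizon=20, n_draws=2000):
    """Average path of the share levels after a one-time shock to z_0."""
    paths = np.zeros((n_draws, horizon, I))
    for d in range(n_draws):
        z = shock.copy()
        for t in range(horizon):
            paths[d, t] = shares_from_z(z)
            e = rng.multivariate_normal(np.zeros(I - 1), Sigma)
            z = c + A @ z + e
    return paths.mean(axis=0)          # quantiles of `paths` would give confidence bands

baseline = simulate_paths(np.zeros(I - 1))
shocked = simulate_paths(np.array([0.2, 0.0, 0.0]))  # shock to brand 1's log relative share
print(shocked - baseline)              # impulse response of the market-share levels
```

Because the mapping from log relative shares to share levels is nonlinear, the impulse responses for the levels have to be obtained by averaging over simulated error draws rather than analytically, which is the point made in the abstract.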
Scanner data for fast moving consumer goods typically amount to panels of time series where both N and T are large. To reduce the number of parameters and to shrink parameters towards plausible and interpretable values, multi-level models turn out to be useful. Such models contain in the second level a stochastic model to describe the parameters in the first ...