Mass univariate analysis is a relatively new approach for the study of ERPs/ERFs. It consists of many statistical tests combined with one of several powerful corrections for multiple comparisons. Multiple comparison corrections differ in their power and permissiveness. Moreover, some methods are not guaranteed to work or may be overly sensitive to uninteresting deviations from the null hypothesis. Here we report the results of simulations assessing the accuracy, permissiveness, and power of six popular multiple comparison corrections (permutation-based control of the family-wise error rate (FWER), weak control of FWER via cluster-based permutation tests, permutation-based control of the generalized FWER, and three false discovery rate control procedures) using realistic ERP data. In addition, we examine the sensitivity of permutation tests to differences in population variance. These results will help researchers apply and interpret these procedures.
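To make the first of these corrections concrete, below is a minimal sketch of permutation-based FWER control using the tmax statistic for a two-condition, within-subjects comparison at every channel and time point. The function name, the sign-flipping scheme, and the NumPy/SciPy implementation are illustrative assumptions, not the simulation code used in the study.

```python
import numpy as np
from scipy import stats

def tmax_permutation_test(cond_a, cond_b, n_perm=2000, seed=0):
    """Permutation-based FWER control via the tmax statistic (illustrative).

    cond_a, cond_b : arrays of shape (n_subjects, n_channels, n_timepoints)
        holding subject-level ERP averages for two within-subject conditions.
    Returns the observed t-scores and FWER-corrected p-values.
    """
    rng = np.random.default_rng(seed)
    diffs = cond_a - cond_b                                   # paired differences
    n_sub = diffs.shape[0]
    t_obs = stats.ttest_1samp(diffs, 0.0, axis=0).statistic   # one t per channel/time point

    # Build the null distribution of the maximum |t| across all tests by
    # randomly flipping the sign of each subject's difference wave.
    tmax_null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=n_sub)[:, None, None]
        t_perm = stats.ttest_1samp(diffs * flips, 0.0, axis=0).statistic
        tmax_null[i] = np.max(np.abs(t_perm))

    # Corrected p-value: proportion of permutations whose tmax meets or
    # exceeds each observed |t|.
    p_corrected = (np.abs(t_obs)[..., None] <= tmax_null).mean(axis=-1)
    return t_obs, p_corrected
```

Because every test is compared against the same tmax null distribution, any channel/time point with a corrected p-value below alpha is significant with strong FWER control; the number of permutations bounds the smallest attainable p-value.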
Event-related potentials (ERPs) and magnetic fields (ERFs) are typically analyzed via ANOVAs on mean activity in a priori windows. Advances in computing power and statistics have produced an alternative: mass univariate analyses consisting of thousands of statistical tests combined with powerful corrections for multiple comparisons. Such analyses are most useful when one has little a priori knowledge of effect locations or latencies, and for delineating effect boundaries. Mass univariate analyses complement and, at times, obviate traditional analyses. Here we review this approach as applied to ERP/ERF data and four methods for multiple comparison correction: strong control of the family-wise error rate (FWER) via permutation tests, weak control of FWER via cluster-based permutation tests, false discovery rate control, and control of the generalized FWER. We end with recommendations for their use and introduce free MATLAB software for their implementation.
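As a companion to the permutation sketch above, here is a minimal sketch of the Benjamini-Hochberg step-up procedure, one standard way of controlling the false discovery rate over a flat vector of uncorrected p-values (one per channel/time point). This is the generic BH rule, not the implementation in the MATLAB toolbox mentioned in the abstract.

```python
import numpy as np

def fdr_bh(p_values, q=0.05):
    """Benjamini-Hochberg step-up procedure (illustrative).

    p_values : 1-D array of uncorrected p-values, one per channel/time test.
    q        : desired false discovery rate.
    Returns a boolean array marking the tests declared significant.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                        # rank p-values in ascending order
    critical = q * np.arange(1, m + 1) / m       # BH critical values i*q/m
    below = p[order] <= critical
    significant = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()           # largest rank i with p_(i) <= i*q/m
        significant[order[:k + 1]] = True        # reject all tests up to that rank
    return significant
```

For example, `fdr_bh(np.array([0.001, 0.02, 0.03, 0.3]), q=0.05)` flags the first three tests as significant, since the third-smallest p-value still falls under its critical value of 3*0.05/4.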
The phonemic restoration effect refers to the tendency for people to hallucinate a phoneme replaced by a non-speech sound (e.g., a tone) in a word. This illusion can be influenced by preceding sentential context providing information about the likelihood of the missing phoneme. The saliency of the illusion suggests that supportive context can affect relatively low (phonemic or lower) levels of speech processing. Indeed, a previous event-related brain potential (ERP) investigation of the phonemic restoration effect found that the processing of coughs replacing high versus low probability phonemes in sentential words differed as early as the auditory N1 (120-180 ms post-stimulus). That result, however, was confounded by physical differences between the high and low probability speech stimuli, so it could have been caused by factors such as habituation rather than by supportive context. We conducted a similar ERP experiment that avoided this confound by using the same auditory stimuli preceded by text that made the critical phonemes more or less probable. We, too, found the robust N400 effect of phoneme/word probability, but we did not observe the early N1 effect. We did, however, observe a left posterior effect of phoneme/word probability around 192-224 ms, which is clear evidence of a relatively early effect of supportive sentence context in speech comprehension distinct from the N400.
Proceedings of Corpus Linguistics 2003.
Over the past five decades, psycholinguists have uncovered robust differences in the processing of concrete and abstract words. One of these findings is that it is easier for people to generate possible contexts for concrete words than for abstract words; that is, concrete words seem to have higher "context availability" (CA). It is not clear why this difference exists, but some have suggested that concrete words may be used in a smaller variety of semantic contexts. In this paper, we review the relevant psycholinguistic literature and report on a previous corpus-based attempt to investigate this property of abstract and concrete words. We then extend the current methods by introducing an information-theoretic measure that we use to test the validity of CA. The result runs counter to current thinking in psycholinguistics and suggests a rethinking of context availability.
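The abstract does not specify the measure, but one natural information-theoretic index of contextual diversity is the entropy of a word's distribution over co-occurring context words. The sketch below is purely illustrative of that idea; the window size, tokenization, and the `context_entropy` function are assumptions and should not be read as the measure introduced in the paper.

```python
import math
from collections import Counter

def context_entropy(corpus_sentences, target, window=5):
    """Entropy (in bits) of a target word's distribution over context words.

    corpus_sentences : iterable of token lists.
    A low value suggests the target occurs in a narrow range of contexts;
    a high value suggests it occurs across many different contexts.
    Illustrative only; the paper's actual measure may differ.
    """
    counts = Counter()
    for tokens in corpus_sentences:
        for i, tok in enumerate(tokens):
            if tok != target:
                continue
            lo, hi = max(0, i - window), i + window + 1
            # Count all words in the window except the target occurrence itself.
            counts.update(w for j, w in enumerate(tokens[lo:hi]) if lo + j != i)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

Under this illustrative measure, the context-availability hypothesis would predict lower entropy (a narrower context distribution) for concrete words than for abstract words.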
Independent component analysis (ICA) is a potentially powerful tool for analyzing event-related potentials (ERPs), one of the most popular measures of brain function in cognitive neuroscience. Based on the statistics of the electroencephalogram (EEG), from which ERPs are derived, ICA may be able to extract multiple, functionally distinct sources of an ERP generated by disparate regions of cerebral cortex. Extracting such sources greatly increases the informativeness of ERPs by providing a cleaner, less ambiguous measure of source activity and by facilitating the identification of this activity across different experimental paradigms. The main purpose of this review article is to explain the logic of ICA, to illustrate how ICA could in principle extract spatiotemporally overlapping ERP sources, and to review evidence that ICA is a well-motivated methodology that can extract latent ERP sources in practice. We close the article by noting potential problems with ICA and by comparing it to three alternative methods for extracting ERP sources/components: spatial principal component analysis, source localization, and temporal principal component analysis.
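As a concrete illustration of the decomposition the review describes, the sketch below applies temporal ICA to a channels-by-samples EEG array using scikit-learn's FastICA. Infomax-family algorithms are more commonly used for EEG, so treat FastICA as an interchangeable stand-in; the function name and preprocessing assumptions are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

def decompose_eeg(eeg, n_components=None, seed=0):
    """Temporal ICA of an EEG recording (illustrative).

    eeg : array of shape (n_channels, n_samples) of continuous or
          concatenated-epoch data.
    Returns (activations, mixing), where the recording is approximately
    mixing @ activations (plus the channel means FastICA removes):
      activations : (n_components, n_samples) IC time courses
      mixing      : (n_channels, n_components) IC scalp maps (one per column)
    """
    ica = FastICA(n_components=n_components, random_state=seed, max_iter=1000)
    # scikit-learn expects samples in rows, so hand it the transposed data.
    activations = ica.fit_transform(eeg.T).T
    mixing = ica.mixing_               # columns are the ICs' scalp topographies
    return activations, mixing
```

Each column of the mixing matrix is an IC's fixed scalp topography, and the corresponding row of the activations is its time course; ERP-like source waveforms are then obtained by averaging the IC activations across epochs.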
Independent component analysis (ICA) is a family of unsupervised learning algorithms that have proven useful for the analysis of the electroencephalogram (EEG) and magnetoencephalogram (MEG). ICA decomposes an EEG/MEG data set into a basis of maximally temporally independent components (ICs) that are learned from the data. As with any statistic, a concern with using ICA is the degree to which the estimated ICs are reliable. An IC may not be reliable if ICA was trained on insufficient data, if ICA training was stopped prematurely or at a local minimum (for some algorithms), or if multiple global minima were present. Consequently, evidence of ICA reliability is critical for the credibility of ICA results. In this paper, we present a new algorithm for assessing the reliability of ICs based on applying ICA separately to split-halves of a data set. This algorithm improves upon existing methods in that it considers both IC scalp topographies and activations, uses a probabilistically interpretable threshold for accepting ICs as reliable, and requires applying ICA only three times per data set. As evidence of the method's validity, we show that it can perform comparably to more time-intensive bootstrap resampling and that it depends in a reasonable manner on the amount of training data. Finally, using the method, we illustrate the importance of checking the reliability of ICs by demonstrating that IC reliability is dramatically increased by removing the mean EEG at each channel for each epoch of data rather than the mean EEG in a prestimulus baseline.
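The published algorithm is defined in the paper itself; as a rough illustration of the split-half idea, the sketch below matches ICs from two half-data decompositions by the absolute correlation of their scalp topographies and keeps pairs exceeding a chosen threshold. The greedy matching rule, the 0.9 threshold, and the neglect of IC activations are simplifying assumptions, not the published criterion.

```python
import numpy as np

def split_half_ic_match(mixing_a, mixing_b, threshold=0.9):
    """Greedy split-half matching of ICs by scalp-topography similarity.

    mixing_a, mixing_b : (n_channels, n_components) mixing matrices whose
        columns are IC scalp topographies from the two halves of the data.
    Returns a list of (idx_a, idx_b, |r|) for matched pairs whose absolute
    topography correlation meets `threshold` (a crude reliability check).
    """
    n_a = mixing_a.shape[1]
    # Absolute correlation between every pair of topographies; the sign of
    # an IC is arbitrary, hence the absolute value.
    corr = np.abs(np.corrcoef(mixing_a.T, mixing_b.T)[:n_a, n_a:])
    matches, used_b = [], set()
    for idx_a in np.argsort(-corr.max(axis=1)):   # best-matched ICs first
        for idx_b in np.argsort(-corr[idx_a]):    # their preferred partners
            if idx_b in used_b:
                continue
            if corr[idx_a, idx_b] >= threshold:
                matches.append((int(idx_a), int(idx_b), float(corr[idx_a, idx_b])))
                used_b.add(idx_b)
            break
    return matches
```

ICs from the full-data decomposition that have a counterpart in both half-data decompositions under such a rule would be treated as reliable; unmatched ICs warrant caution.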
ProQuest Dissertations & Theses: Common independent components of the P3b, N400, and P600 ERP components to deviant linguistic events.