Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change

Meeting of the Association for Computational Linguistics, 2016.

Keywords:
statistical law, frequent word, positive point-wise mutual information, semantic change, word frequency

Abstract:

Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test. Word embeddings show promise as a diachronic tool, but have not been carefully evaluated. We develop a robust methodology for quantifying semantic …

Introduction
  • Shifts in word meaning exhibit systematic regularities (Bréal, 1897; Ullmann, 1962). The rate of semantic change, for example, is higher in some words than others (Blank, 1999) — compare the stable semantic history of cat (from Proto-Germanic kattuz, “cat”) to the varied meanings of English cast: “to mould”, “a collection of actors”, “a hardened bandage”, etc. (all from Old Norse kasta, “to throw”; Simpson et al., 1989).

    Various hypotheses have been offered about such regularities in semantic change, such as an increasing subjectification of meaning, or the grammaticalization of inferences (e.g., Geeraerts, 1997; Blank, 1999; Traugott and Dasher, 2001).

    But many core questions about semantic change remain unanswered.
  • What is the role of word frequency in meaning change?
Highlights
  • Shifts in word meaning exhibit systematic regularities (Breal, 1897; Ullmann, 1962)
  • Frequency plays a key role in other linguistic changes, associated sometimes with faster change (sound changes like lenition occur in more frequent words) and sometimes with slower change (high-frequency words are more resistant to morphological regularization) (Bybee, 2007; Pagel et al., 2007; Lieberman et al., 2007)
  • We show how diachronic embeddings can be used in a large-scale cross-linguistic analysis to reveal statistical laws that relate frequency and polysemy to semantic change
  • We show how distributional methods can reveal statistical laws of semantic change and offer a robust methodology for future work in this area
  • Our work builds upon a wealth of previous research on quantitative approaches to semantic change, including prior work with distributional methods (Sagi et al., 2011; Wijaya and Yeniterzi, 2011; Gulordava and Baroni, 2011; Jatowt and Duh, 2014; Kulkarni et al., 2014; Xu and Kemp, 2015), as well as recent work on detecting the emergence of novel word senses (Lau et al., 2012; Mitra et al., 2014; Cook et al., 2014; Mitra et al., 2015; Frermann and Lapata, 2016)
  • We extend these lines of work by rigorously comparing different approaches to quantifying semantic change and by using these methods to propose new statistical laws of semantic change
Methods
  • SVD performs best on the synchronic accuracy task and has higher average accuracy on the ‘detection’ task, while SGNS performs best on the ‘discovery’ task
  • These results suggest that both these methods are reasonable choices for studies of semantic change but that they each have their own tradeoffs: SVD is more sensitive, as it performs well on detection tasks even when using a small dataset, but this sensitivity results in false discoveries due to corpus artifacts.
  • The authors found SGNS to be most useful for discovering new shifts and visualizing changes (e.g., Figure 1), while SVD was most effective for detecting subtle shifts in usage
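The detection and discovery comparisons above rest on a shared displacement measure: embeddings from adjacent decades are aligned with an orthogonal Procrustes rotation (Schönemann, 1966, cited below), and each word's cosine distance across the aligned spaces is its semantic displacement. A minimal numpy sketch of the mechanics, on toy data (the matrix sizes and helper names are illustrative, not the paper's code):

```python
import numpy as np

def procrustes_align(A, B):
    """Orthogonal Procrustes: the orthogonal R minimizing ||A R - B||_F
    is R = U V^T, where U S V^T = svd(A^T B)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return A @ (U @ Vt)

def displacement(A, B):
    """Per-word cosine distance between aligned decade A and decade B."""
    A = procrustes_align(A, B)
    num = (A * B).sum(axis=1)
    den = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    return 1.0 - num / den

rng = np.random.default_rng(0)
A = rng.normal(size=(10, 4))   # 10 words, 4-dim embeddings, decade t
B = A.copy()
B[0] = -B[0]                   # word 0 reverses direction by decade t+1
d = displacement(A, B)         # word 0 should get the largest displacement
```

In the full pipeline this alignment is applied between consecutive time slices before computing displacement; the toy matrices above only illustrate the mechanics.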
Conclusion
  • Future studies of semantic change must account for frequency’s conforming effect: when examining the interaction between some linguistic process and semantic change, the law of conformity should serve as a null model in which the interaction is driven primarily by underlying frequency effects
Tables
  • Table1: Six large historical datasets from various languages and sources are used
  • Table2: Set of attested historical shifts used to evaluate the methods. The examples are taken from previous works on semantic change and from the Oxford English Dictionary (OED), e.g. using ‘obsolete’ tags. The shift start points were estimated using attestation dates in the OED. The first six examples are words that shifted dramatically in meaning while the remaining four are words that acquired new meanings (while potentially also keeping their old ones)
  • Table3: Performance on detection task, i.e. ability to capture the attested shifts from Table 2. SGNS and SVD capture the correct directionality of the shifts in all cases (%Correct), e.g., gay becomes more similar to homosexual, but there are differences in whether the methods deem the shifts to be statistically significant at the p < 0.05 level (%Sig)
  • Table4: Top-10 English words with the highest semantic displacement values between the 1900s and 1990s. Bolded entries correspond to real semantic shifts, as deemed by examining the literature and their nearest neighbors; for example, headed shifted from primarily referring to the “top of a body/entity” to referring to “a direction of travel.” Underlined entries are borderline cases that are largely due to global genre/discourse shifts; for example, male has not changed in meaning, but its usage in discussions of “gender equality” is relatively new. Finally, unmarked entries are clear corpus artifacts; for example, special, cover, and romance are artifacts from the covers of fiction books occasionally including advertisements, etc.
  • Table5: Example words that changed dramatically in meaning in three languages, discovered using SGNS embeddings. The examples were selected from the top-10 most-changed lists between the 1900s and 1990s as in Table 4. In English, wanting underwent subjectification and shifted from meaning “lacking” to referring to subjective “desire”, as in “the education system is wanting” (1900s) vs. “I’ve been wanting to tell you” (1990s). In French, asile (“asylum”) shifted from primarily referring to “hospitals, or infirmaries” to also referring to “asylum seekers, or refugees”. Finally, in German, Widerstand (“resistance”) gained a formal meaning as referring to the local German resistance to Nazism during World War II
  • Table6: The top-10 most and least polysemous words in the ENGFIC data. Words like yet, even, and still are used in many diverse ways and are highly polysemous. In contrast, words like photocopying, postage, and holster tend to be used in very specific well-clustered contexts, corresponding to a single sense; for example, mail and letter are both very likely to occur in the context of postage and are also likely to co-occur with each other, independent of postage
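The contrast Table 6 describes (diverse, diffuse contexts vs. very specific, well-clustered ones) is exactly what a local clustering coefficient (Watts and Strogatz, 1998, cited below) measures on a word co-occurrence graph. A toy numpy sketch of that statistic; treating it as a stand-in for the paper's exact polysemy score is an assumption of this illustration:

```python
import numpy as np

def local_clustering(adj, i):
    """Local clustering coefficient of node i in an undirected 0/1 graph:
    the fraction of i's neighbor pairs that are themselves linked."""
    nbrs = np.flatnonzero(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = adj[np.ix_(nbrs, nbrs)].sum() / 2   # edges among neighbors
    return links / (k * (k - 1) / 2)

# toy co-occurrence graph for 4 words: word 0 co-occurs with 1, 2, 3,
# but only the pair (1, 2) co-occurs with each other
adj = np.array([
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
])
c0 = local_clustering(adj, 0)   # 1 linked pair out of 3 possible
```

A word like postage, whose neighbors (mail, letter) also co-occur with each other, gets a high coefficient (single sense); a word like still, whose contexts rarely co-occur with one another, gets a low one (polysemous).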
Funding
  • W.H. was supported by an NSERC PGS-D grant and the SAP Stanford Graduate Fellowship
  • W.H., D.J., and J.L. were supported by the Stanford Data Science Initiative, and NSF Awards IIS-1514268, IIS-1149837, and IIS-1159679
References
  • James S. Adelman, Gordon D. A. Brown, and Jose F. Quesada. 2006. Contextual diversity, not word frequency, determines word-naming and lexical decision times. Psychol. Sci., 17(9):814–823.
  • Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media, Inc.
  • Andreas Blank. 1999. Why do new meanings occur? A cognitive typology of the motivations for lexical semantic change. In Peter Koch and Andreas Blank, editors, Historical Semantics and Cognition. Walter de Gruyter, Berlin, Germany.
  • Robert Boyd and Peter J. Richerson. 1988. Culture and the Evolutionary Process. University of Chicago Press, Chicago, IL.
  • Elia Bruni, Gemma Boleda, Marco Baroni, and Nam-Khanh Tran. 2012. Distributional semantics in technicolor. In Proc. ACL, pages 136–145.
  • Michel Bréal. 1897. Essai de Sémantique: Science des significations. Hachette, Paris, France.
  • John A. Bullinaria and Joseph P. Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behav. Res. Methods, 39(3):510–526.
  • John A. Bullinaria and Joseph P. Levy. 2012. Extracting semantic representations from word co-occurrence statistics: stop-lists, stemming, and SVD. Behav. Res. Methods, 44(3):890–907.
  • J. L. Bybee. 2007. Frequency of Use and the Organization of Language. Oxford University Press, New York City, NY.
  • Paul Cook, Jey Han Lau, Diana McCarthy, and Timothy Baldwin. 2014. Novel word-sense identification. In Proc. COLING, pages 1624–1635.
  • Scott Crossley, Tom Salsbury, and Danielle McNamara. 2010. The development of polysemy and frequency use in English second language speakers. Language Learning, 60(3):573–605.
  • Mark Davies. 2010. The Corpus of Historical American English: 400 million words, 1810–2009. http://corpus.byu.edu/coha/.
  • Beate Dorow and Dominic Widdows. 2003. Discovering corpus-specific word senses. In Proc. EACL, pages 79–82.
  • Christiane Fellbaum. 1998. WordNet. Wiley Online Library.
  • Olivier Ferret. 2004. Discovering word senses from a network of lexical cooccurrences. In Proc. COLING, page 1326.
  • J. R. Firth. 1957. A Synopsis of Linguistic Theory, 1930–1955. In Studies in Linguistic Analysis. Special volume of the Philological Society. Basil Blackwell, Oxford, UK.
  • Lea Frermann and Mirella Lapata. 2016. A Bayesian model of diachronic meaning change. Trans. ACL, 4:31–45.
  • Dirk Geeraerts. 1997. Diachronic Prototype Semantics: A Contribution to Historical Lexicology. Clarendon Press, Oxford, UK.
  • Michael H. Graham. 2003. Confronting multicollinearity in ecological multiple regression. Ecology, 84(11):2809–2815.
  • Kristina Gulordava and Marco Baroni. 2011. A distributional similarity approach to the detection of semantic change in the Google Books Ngram corpus. In Proc. GEMS 2011 Workshop on Geometrical Models of Natural Language Semantics, pages 67–71. Association for Computational Linguistics.
  • Zellig S. Harris. 1954. Distributional structure. Word, 10:146–162.
  • Paul J. Hopper and Elizabeth Closs Traugott. 2003. Grammaticalization. Cambridge University Press, Cambridge, UK.
  • Adam Jatowt and Kevin Duh. 2014. A framework for analyzing semantic change of words across time. In Proc. ACM/IEEE-CS Conf. on Digital Libraries, pages 229–238. IEEE Press.
  • R. Jeffers and Ilse Lehiste. 1979. Principles and Methods for Historical Linguistics. MIT Press, Cambridge, MA.
  • Adam Kilgarriff. 2004. How dominant is the commonest sense of a word? In Text, Speech and Dialogue, pages 103–111. Springer.
  • Yoon Kim, Yi-I. Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. arXiv preprint arXiv:1405.3515.
  • Vivek Kulkarni, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2014. Statistically significant detection of linguistic change. In Proc. WWW, pages 625–635.
  • Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychol. Rev., 104(2):211.
  • Jey Han Lau, Paul Cook, Diana McCarthy, David Newman, and Timothy Baldwin. 2012. Word sense induction for novel sense detection. In Proc. EACL, pages 591–601.
  • Omer Levy, Yoav Goldberg, and Ido Dagan. 2015. Improving distributional similarity with lessons learned from word embeddings. Trans. ACL, 3.
  • Qi Li and Jeffrey Scott Racine. 2007. Nonparametric Econometrics: Theory and Practice. Princeton University Press, Princeton, NJ.
  • Erez Lieberman, Jean-Baptiste Michel, Joe Jackson, Tina Tang, and Martin A. Nowak. 2007. Quantifying the evolutionary dynamics of language. Nature, 449(7163):713–716.
  • Yuri Lin, Jean-Baptiste Michel, Erez Lieberman Aiden, Jon Orwant, Will Brockman, and Slav Petrov. 2012. Syntactic annotations for the Google Books Ngram corpus. In Proc. ACL, System Demonstrations, pages 169–174.
  • Charles E. McCulloch and John M. Neuhaus. 2001. Generalized Linear Mixed Models. Wiley-Interscience, Hoboken, NJ.
  • Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, and others. 2011. Quantitative analysis of culture using millions of digitized books. Science, 331(6014):176–182.
  • Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119.
  • Sunny Mitra, Ritwik Mitra, Martin Riedl, Chris Biemann, Animesh Mukherjee, and Pawan Goyal. 2014. That's sick dude!: Automatic identification of word sense change across different timescales. In Proc. ACL.
  • Sunny Mitra, Ritwik Mitra, Suman Kalyan Maity, Martin Riedl, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2015. An automatic approach to identify word sense changes in text media across timescales. Natural Language Engineering, 21(05):773–798.
  • Shinichi Nakagawa and Holger Schielzeth. 2013. A general and simple method for obtaining R² from generalized linear mixed-effects models. Methods Ecol. Evol., 4(2):133–142.
  • Mark Pagel, Quentin D. Atkinson, and Andrew Meade. 2007. Frequency of word-use predicts rates of lexical evolution throughout Indo-European history. Nature, 449(7163):717–720.
  • Eitan Adam Pechenick, Christopher M. Danforth, and Peter Sheridan Dodds. 2015. Characterizing the Google Books corpus: Strong limits to inferences of socio-cultural and linguistic evolution. PLoS ONE, 10(10).
  • F. Reali and T. L. Griffiths. 2010. Words as alleles: connecting language evolution with Bayesian learners to models of genetic drift. Proc. R. Soc. B, 277(1680):429–436.
  • Eyal Sagi, Stefan Kaufmann, and Brady Clark. 2011. Tracing semantic change with latent semantic analysis. In Kathryn Allan and Justyna A. Robinson, editors, Current Methods in Historical Semantics, page 161. De Gruyter Mouton, Berlin, Germany.
  • Benoît Sagot, Lionel Clément, Éric de La Clergerie, and Pierre Boullier. 2006. The Lefff 2 syntactic lexicon for French: architecture, acquisition, use. In Proc. LREC, pages 1–4.
  • Gerold Schneider and Martin Volk. 1998. Adding manual constraints and lexical look-up to a Brill-tagger for German. In Proc. ESSLLI-98 Workshop on Recent Advances in Corpus Annotation, Saarbrücken.
  • Peter H. Schönemann. 1966. A generalized solution of the orthogonal Procrustes problem. Psychometrika, 31(1):1–10.
  • J. S. Seabold and J. Perktold. 2010. Statsmodels: Econometric and statistical modeling with Python. In Proc. 9th Python in Science Conference.
  • John Andrew Simpson, Edmund S. C. Weiner, et al. 1989. The Oxford English Dictionary, volume 2. Clarendon Press, Oxford, UK.
  • Elizabeth Closs Traugott and Richard B. Dasher. 2001. Regularity in Semantic Change. Cambridge University Press, Cambridge, UK.
  • Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. J. Artif. Intell. Res., 37(1):141–188.
  • S. Ullmann. 1962. Semantics: An Introduction to the Science of Meaning. Barnes & Noble, New York City, NY.
  • Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605.
  • Duncan J. Watts and Steven H. Strogatz. 1998. Collective dynamics of 'small-world' networks. Nature, 393(6684):440–442.
  • Derry Tanti Wijaya and Reyyan Yeniterzi. 2011. Understanding semantic change of words over centuries. In Proc. Workshop on Detecting and Exploiting Cultural Diversity on the Social Web, pages 35–40. ACM.
  • David P. Wilkins. 1993. From part to person: Natural tendencies of semantic change and the search for cognates. Cognitive Anthropology Research Group at the Max Planck Institute for Psycholinguistics.
  • B. Winter, Graham Thompson, and Matthias Urban. 2014. Cognitive factors motivating the evolution of word meanings: Evidence from corpora, behavioral data and encyclopedic network structure. In Proc. EVOLANG, pages 353–360.
  • Yang Xu and Charles Kemp. 2015. A computational evaluation of two laws of semantic change. In Proc. Annual Conf. of the Cognitive Science Society.
  • Yang Xu, Terry Regier, and Barbara C. Malt. 2015. Historical semantic chaining and efficient communication: The case of container names. Cognitive Science.
  • Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(02):207–238.
  • George Kingsley Zipf. 1945. The meaning-frequency relationship of words. J. Gen. Psychol., 33(2):251–256.
  • Bahar İlgen and Bahar Karaoğlan. 2007. Investigation of Zipf's 'law-of-meaning' on Turkish corpora. In International Symposium on Computer and Information Sciences, pages 1–6. IEEE.
  • For all methods, we used the hyperparameters recommended in Levy et al. (2015). For the context word distributions in all methods, we used context distribution smoothing with a smoothing parameter of 0.75. Note that for SGNS this corresponds to smoothing the unigram negative-sampling distribution. For both SGNS and PPMI, we set the negative-sample prior α = log(5), while we set this value to α = 0 for SVD, as this improved results. When using SGNS on the Google data, we also subsampled, with tokens of word wi being randomly removed with probability pr(wi) = 1 − √(t / f(wi)), where f(wi) is wi's relative frequency and t is the subsampling threshold, as recommended by Levy et al. (2015) and Mikolov et al. (2013). Furthermore, to improve the computational efficiency of SGNS (which works with text streams and not co-occurrence counts), we downsampled the larger years in the Google N-gram data.
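The subsampling rule referenced here follows Mikolov et al. (2013): each token of word wi is dropped with probability 1 − √(t / f(wi)). A small sketch; the threshold t = 1e-5 is word2vec's common default and is an assumption here, since the page truncates the exact value:

```python
import numpy as np

def subsample_prob(freq, t=1e-5):
    """Probability of dropping a token of a word with relative frequency
    `freq` (Mikolov et al., 2013). t = 1e-5 is an assumed default."""
    return np.maximum(0.0, 1.0 - np.sqrt(t / freq))

# frequent words are aggressively dropped; rare words are always kept
p = subsample_prob(np.array([1e-2, 1e-4, 1e-6]))
```

The clipping at zero means words rarer than the threshold are never subsampled, which is why the rule mainly thins out high-frequency tokens.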
  • For all methods, we defined the context set to simply be the same vocabulary as the target words, as is standard in most word vector applications (Levy et al., 2015). However, we found that the PPMI method benefited substantially from larger contexts (similar results were found in Bullinaria and Levy, 2007), so we did not remove any low-frequency words per year from the context for that method. The other embedding approaches did not appear to benefit from the inclusion of these low-frequency terms, so they were dropped for computational efficiency.
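The context-distribution smoothing mentioned above raises context counts to the 0.75 power before normalizing, which damps the PMI of rare contexts. A toy numpy sketch of PPMI with this smoothing (the matrix and helper name are illustrative):

```python
import numpy as np

def ppmi(C, cds=0.75):
    """Positive PMI from a word-by-context count matrix C, with context
    marginals smoothed as count**cds (Levy et al., 2015)."""
    total = C.sum()
    pw = C.sum(axis=1) / total            # word marginals
    ctx = C.sum(axis=0) ** cds            # smoothed context counts
    pc = ctx / ctx.sum()                  # smoothed context marginals
    with np.errstate(divide="ignore"):    # log(0) -> -inf for zero counts
        pmi = np.log((C / total) / np.outer(pw, pc))
    return np.maximum(pmi, 0.0)           # clip negatives (and -inf) to 0

C = np.array([[10.0, 0.0],
              [2.0, 8.0]])                # 2 words x 2 contexts, toy counts
M = ppmi(C)
```

Zero co-occurrences get PMI of negative infinity and are clipped to zero, which is exactly what makes the measure "positive" PMI.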
  • For SGNS, we used the implementation provided in Levy et al. (2015). The implementations for PPMI and SVD are released with the code package associated with this work.
  • To visualize semantic change for a word wi in two dimensions we employed the following procedure, which relies on the t-SNE embedding method (Van der Maaten and Hinton, 2008) as a subroutine:

    1. Find the union of the word wi's k nearest neighbors over all necessary time-points.
    2. Compute the t-SNE embedding of these words on the most recent (i.e., the modern) time-point.
    3. For each of the previous time-points, hold all embeddings fixed, except for the target word's (i.e., the embedding for wi), and optimize a new t-SNE embedding only for the target word. We found that initializing the embedding for the target word to the centroid of its k-nearest neighbors in a time-point was highly effective.

  • In addition to the pre-processing mentioned in the main text, we also normalized the contextual diversity scores d(wi) within years by subtracting the yearly median. This was necessary because there were substantial changes in the median contextual diversity scores over years due to changes in corpus sample sizes, etc. Data points corresponding to words that occurred fewer than 500 times during a time-period were also discarded, as these points lack sufficient data to robustly estimate change rates (this threshold only came into effect on the COHA data, however). We removed stop words and proper nouns by (i) removing all stop words from the available lists in Python's NLTK package (Bird et al., 2009) and (ii) restricting our analysis to words with part-of-speech (POS) tags corresponding to four main linguistic categories (common nouns, verbs, adverbs, and adjectives), using the POS sources in Table 1.
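Step 1 of the visualization procedure, taking the union of a word's k nearest neighbors across time-points, can be sketched with plain cosine similarity (the toy vectors and function names are illustrative):

```python
import numpy as np

def knn(emb, i, k):
    """Indices of the k nearest neighbors of word i by cosine similarity."""
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = X @ X[i]
    sims[i] = -np.inf                 # exclude the word itself
    return set(np.argsort(sims)[-k:])

def neighbor_union(embeddings_by_year, i, k):
    """Step 1: union of word i's k-NN over all time-points."""
    out = set()
    for emb in embeddings_by_year:
        out |= knn(emb, i, k)
    return out

# toy 2-d embeddings for 4 words at two time-points; word 0's nearest
# neighbor is word 1 early on, word 2 later
emb1 = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]])
emb2 = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
union = neighbor_union([emb1, emb2], 0, k=1)
```

Collecting the union rather than a single decade's neighbors is what lets the final t-SNE plot show both the old and the new neighborhoods of the target word.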
  • To fit the linear mixed models, we used the Python statsmodels package with restricted maximum likelihood estimation (REML) (Seabold and Perktold, 2010). All mentioned significance scores were computed according to Wald z-tests, though these results agreed with Bonferroni-corrected likelihood-ratio tests on the eng-all data.
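The full analysis requires statsmodels mixed models, but the headline frequency effect reduces to a log-log regression of change rate on frequency. A simplified OLS stand-in on synthetic data (the slope of -0.8 and the data are illustrative only, not the paper's estimate):

```python
import numpy as np

rng = np.random.default_rng(1)
log_freq = rng.uniform(-8.0, -3.0, size=200)   # log relative frequencies
# synthetic change rates following a negative power law plus noise;
# the -0.8 exponent is a made-up illustration
log_change = 2.0 - 0.8 * log_freq + rng.normal(scale=0.1, size=200)

# OLS fit of log(change rate) ~ log(frequency); a negative slope is the
# qualitative "law of conformity" pattern (frequent words change less)
X = np.column_stack([np.ones_like(log_freq), log_freq])
coef, *_ = np.linalg.lstsq(X, log_change, rcond=None)
slope = float(coef[1])
```

The mixed-model version adds per-word and per-decade random effects on top of this fixed effect, which is why the paper reports REML fits rather than plain OLS.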
  • The visualizations in Figure 2 were computed on the eng-all data and correspond to bootstrapped locally-linear kernel regressions with bandwidths selected via the corrected AIC (AICc; Hurvich) criterion (Li and Racine, 2007).