Topics in Cognitive Science, Oct 2016
Review
It is well established that language production and comprehension are influenced by information status, for example, whether information is given, new, topical, or predictable, and many scholars suggest that an important component of information status is keeping track of what information is in common ground (i.e., what is shared), and what is not. Information status affects both speakers' choices (e.g., word order, pronoun use, prosodic prominence) and how listeners interpret the speaker's meaning (e.g., Chafe, 1994; Prince, 1981). Although there is a wealth of scholarly work on information status (for a review, see Arnold, Kaiser, Kahn, & Kim, 2013), there is no consensus on the mechanisms by which it is used, and in fact relatively little discussion of the underlying representations and psycholinguistic mechanisms. Moreover, a major challenge to understanding information status is that its effects are notoriously variable. This study considers existing proposals about information status, focusing on two questions: (a) how is it represented; and (b) by what mechanisms is it used? I propose that it is important to consider whether representations and mechanisms can be classified as either explicit or emergent. Based on a review of existing evidence, I argue that information status representations are most likely emergent, but the mechanisms by which they are used are both explicit and emergent. This review provides one of the first considerations of information status processing across multiple domains.
Topics: Choice Behavior; Comprehension; Humans; Language; Psycholinguistics; Speech Perception
PubMed: 27766755
DOI: 10.1111/tops.12220

Topics in Cognitive Science, Jan 2020
Framed in psychological terms, the basic question of linguistic theory is what is stored in memory, and in what form. Traditionally, what is stored is divided into grammar and lexicon, where grammar contains the rules and the lexicon is an unstructured list of exceptions. We develop an alternative view in which rules of grammar are simply lexical items that contain variables, and in which rules have two functions. In their generative function, they are used to build novel structures, just as in traditional generative linguistics. In their relational function, they capture generalizations over stored items in the lexicon, a role not seriously explored in traditional linguistic theory. The result is a highly structured lexicon with rich patterns among stored items. We further explore the possibility that this sort of structuring is not specific to language, but appears in other cognitive domains as well, such as the structure of physical objects, of music, and of geographical and social knowledge. The differences among cognitive domains do not lie in this overall texture, but in the materials over which stored relations are defined. The challenge is to develop theories of representation in these other domains comparable to that for language.
Topics: Humans; Linguistics; Memory; Phonetics; Psycholinguistics; Psychological Theory; Semantics
PubMed: 29772110
DOI: 10.1111/tops.12334

Topics in Cognitive Science, Jul 2019
Review
A long-standing question in child language research concerns how children achieve mature syntactic knowledge in the face of a complex linguistic environment. A widely accepted view is that this process involves extracting distributional regularities from the environment in a manner that is incidental and happens, for the most part, without the learner's awareness. In this way, the debate speaks to two associated but separate literatures in language acquisition: statistical learning and implicit learning. Both fields have explored this issue in some depth but, at present, neither the results from the infant studies in the statistical learning literature nor the artificial grammar learning studies in the implicit learning literature can fully explain how children's syntax becomes adult-like. In this work, we consider an alternative explanation: that children use error-based learning to become mature syntax users. We discuss this proposal in the light of the behavioral findings from structural priming studies and the computational findings from Chang, Dell, and Bock's (2006) dual-path model, which incorporates properties from both statistical and implicit learning and offers an explanation for syntax learning and structural priming using a common error-based learning mechanism. We then turn our attention to future directions for the field, suggesting how structural priming might inform the statistical learning and implicit learning literatures on the nature of the learning mechanism.
Topics: Child; Child Development; Humans; Learning; Models, Theoretical; Psycholinguistics
PubMed: 30414244
DOI: 10.1111/tops.12396

Behavior Research Methods, Oct 2021
This paper introduces the Grievance Dictionary, a psycholinguistic dictionary that can be used to automatically understand language use in the context of grievance-fueled violence threat assessment. We describe the development of the dictionary, which was informed by suggestions from experienced threat assessment practitioners. These suggestions and subsequent human and computational word list generation resulted in a dictionary of 20,502 words annotated by 2,318 participants. The dictionary was validated by applying it to texts written by violent and non-violent individuals, showing strong evidence for a difference between populations in several dictionary categories. Further classification tasks showed promising performance, but future improvements are still needed. Finally, we provide instructions and suggestions for the use of the Grievance Dictionary by security professionals and (violence) researchers.
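The core operation of a psycholinguistic dictionary of this kind is straightforward: score a text by the proportion of its tokens that fall into each category's word list. A minimal sketch of that scoring step is below; the category names and word lists are invented placeholders for illustration, not entries from the actual Grievance Dictionary.

```python
# Sketch of dictionary-based category scoring. The word lists below are
# hypothetical placeholders, NOT the actual Grievance Dictionary categories.

def category_score(text, category_words):
    """Return the fraction of tokens in `text` found in `category_words`."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in category_words)
    return hits / len(tokens)

# Invented category lists, for illustration only.
dictionary = {
    "grievance": {"unfair", "betrayed", "wronged", "injustice"},
    "weaponry": {"rifle", "ammunition", "blade"},
}

text = "They betrayed me and the injustice will not stand"
scores = {cat: category_score(text, words) for cat, words in dictionary.items()}
```

Real dictionary tools typically add lemmatization and wildcard matching on top of this token lookup, but the per-category proportion score is the common denominator.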
Topics: Humans; Language; Psycholinguistics; Writing
PubMed: 33755932
DOI: 10.3758/s13428-021-01536-2

Topics in Cognitive Science, Jan 2018
Review
In this commentary on "Memory and Common Ground Processes in Language Use," I draw attention to relevant work on mindreading. The concerns of research on common ground and mindreading overlap significantly, but these literatures have worked in relative isolation from each other. I attempt an assimilation, pointing out shared and distinctive concerns and mutually informative results.
Topics: Humans; Psycholinguistics; Social Perception; Theory of Mind
PubMed: 29143472
DOI: 10.1111/tops.12308

Topics in Cognitive Science, Apr 2020
We present a theoretical framework bearing on the evolution of written communication. We analyze writing as a special kind of graphic code. Like languages, graphic codes consist of stable, conventional mappings between symbols and meanings, but (unlike spoken or signed languages) their symbols consist of enduring images. This gives them the unique capacity to transmit information in one go across time and space. Yet this capacity usually remains largely unexploited, because most graphic codes are insufficiently informative. They may only be used for mnemonic purposes or as props for oral communication in real-time encounters. Writing systems, unlike other graphic codes, work by encoding a natural language. This allows them to support asynchronous communication in a more powerful and versatile way than any other graphic code. Yet, writing systems will not automatically unlock the capacity to communicate asynchronously. We argue that this capacity is a rarity in non-literate societies, and not so frequent even in literate ones. Asynchronous communication is intrinsically inefficient because asynchrony constrains the amount of information that the interlocutors share and limits possibilities for repair. This would explain why synchronous, face-to-face communication always fosters the development of sophisticated codes (natural languages), but similar codes for asynchronous communication evolve with greater difficulty. It also implies that writing cannot have evolved, at first, for supporting asynchronous communication.
Topics: Communication; Cultural Evolution; Humans; Psycholinguistics; Time Factors; Writing
PubMed: 30306732
DOI: 10.1111/tops.12386

Topics in Cognitive Science, Jan 2020
Children use syntax to learn verbs, in a process known as syntactic bootstrapping. The structure-mapping account proposes that syntactic bootstrapping begins with a universal bias to map each noun phrase in a sentence onto a participant role in a structured conceptual representation of an event. Equipped with this bias, children interpret the number of noun phrases accompanying a new verb as evidence about the semantic predicate-argument structure of the sentence, and therefore about the meaning of the verb. In this paper, we first review evidence for the structure-mapping account, and then discuss challenges to the account arising from the existence of languages that allow verbs' arguments to be omitted, such as Korean. These challenges prompt us to (a) refine our notion of the distributional learning mechanisms that create representations of sentence structure, and (b) propose that an expectation of discourse continuity allows children to gather linguistic evidence for each verb's arguments across sentences in a coherent discourse. Taken together, the proposed learning mechanisms and biases sketch a route whereby simple aspects of sentence structure guide verb learning from the start of multi-word sentence comprehension, and do so even if some of the new verb's arguments are omitted due to discourse redundancy.
Topics: Child Development; Child, Preschool; Concept Formation; Humans; Infant; Language Development; Learning; Psycholinguistics
PubMed: 31419084
DOI: 10.1111/tops.12447

Topics in Cognitive Science, Jul 2020
Review
In many domains of human cognition, hierarchically structured representations are thought to play a key role. In this paper, we start with some foundational definitions of key phenomena like "sequence" and "hierarchy," and then outline potential signatures of hierarchical structure that can be observed in behavioral and neuroimaging data. Appropriate behavioral methods include classic ones from psycholinguistics along with some from the more recent artificial grammar learning and sentence processing literature. We then turn to neuroimaging evidence for hierarchical structure with a focus on the functional MRI literature. We conclude that, although a broad consensus exists about a role for a neural circuit incorporating the inferior frontal gyrus, the superior temporal sulcus, and the arcuate fasciculus, considerable uncertainty remains about the precise computational function(s) of this circuitry. An explicit theoretical framework, combined with an empirical approach focusing on distinguishing between plausible alternative hypotheses, will be necessary for further progress.
Topics: Functional Neuroimaging; Humans; Memory; Models, Theoretical; Nerve Net; Psycholinguistics
PubMed: 31364310
DOI: 10.1111/tops.12442

Perspectives on Psychological Science, Nov 2019
Review
Models that represent meaning as high-dimensional numerical vectors, such as latent semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the aggregate language environment (BEAGLE), topic models, global vectors (GloVe), and word2vec, have been introduced as extremely powerful machine-learning proxies for human semantic representations and have seen an explosive rise in popularity over the past two decades. However, despite their considerable advancements and spread in the cognitive sciences, problems remain with the adequate presentation and understanding of some of their features. Indeed, when these models are examined from a cognitive perspective, a number of unfounded arguments tend to appear in the psychological literature. In this article, we review the most common of these arguments and discuss (a) what exactly these models represent at the implementational level and their plausibility as a cognitive theory, (b) how they deal with various aspects of meaning such as polysemy or compositionality, and (c) how they relate to the debate on embodied and grounded cognition. We identify common misconceptions that arise from incomplete descriptions, outdated arguments, and unclear distinctions between theory and implementation of the models. We clarify and amend these points to provide a theoretical basis for future research and discussions on vector models of semantic representation.
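The shared idea behind these models is that a word's meaning can be approximated by its distribution over contexts, with semantic similarity measured as the cosine between context vectors. The sketch below is a toy count-based illustration of that idea (in the spirit of HAL-style window counting), not an implementation of any of the specific models named in the abstract; the corpus and window size are arbitrary choices.

```python
# Toy distributional-semantics sketch: represent each word by its window
# co-occurrence counts, then compare words by cosine similarity. This is
# an illustration of the general vector-space idea, not LSA/HAL/word2vec.
from collections import Counter
from math import sqrt

def cooccurrence_vectors(sentences, window=2):
    """Map each word to a Counter of words seen within +/- `window` tokens."""
    vectors = {}
    for sent in sentences:
        tokens = sent.lower().split()
        for i, w in enumerate(tokens):
            context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            vectors.setdefault(w, Counter()).update(context)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts, so their vectors end up close.
```

The models discussed in the article differ in how they compress or learn such vectors (dimensionality reduction in LSA, prediction objectives in word2vec), which is exactly where the theory-versus-implementation distinctions the authors stress become important.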
Topics: Humans; Models, Theoretical; Psycholinguistics; Psychological Theory; Semantics
PubMed: 31505121
DOI: 10.1177/1745691619861372

Philosophical Transactions of the Royal..., Jan 2020
Syntax has been found in animal communication, but only humans appear to have generative, hierarchically structured syntax. How did syntax evolve? I discuss three theories of evolutionary transition from animal to human syntax: computational capacity, structural flexibility, and event perception. The computational capacity hypothesis is supported by artificial grammar experiments consistently showing that only humans can learn linear stimulus sequences with an underlying hierarchical structure, a possible by-product of computationally powerful large brains. The structural flexibility hypothesis is supported by evidence of meaning-bearing combinatorial and permutational signal sequences in animals, with sometimes compositional features, but no evidence for generativity or hierarchical structure. Again, animals may be constrained by computational limits in short-term memory but possibly also by limits in articulatory control and social cognition. The event perception hypothesis, finally, posits that humans are cognitively predisposed to analyse natural events by assigning agency and assessing how agents impact on patients, a propensity that is reflected by the basic syntactic units in all languages. Whether animals perceive natural events in the same way is largely unknown, although event perception may provide the cognitive grounding for syntax evolution. This article is part of the theme issue 'What can animal communication teach us about human language?'
Topics: Animal Communication; Animals; Biological Evolution; Brain; Cognition; Humans; Language; Learning; Linguistics; Memory; Psycholinguistics; Semantics
PubMed: 31735152
DOI: 10.1098/rstb.2019.0062