Frontiers in Psychology 2024
PubMed: 38440237
DOI: 10.3389/fpsyg.2024.1375105
Topics in Cognitive Science Jul 2019 (Review)
A long-standing question in child language research concerns how children achieve mature syntactic knowledge in the face of a complex linguistic environment. A widely accepted view is that this process involves extracting distributional regularities from the environment in a manner that is incidental and happens, for the most part, without the learner's awareness. In this way, the debate speaks to two associated but separate literatures in language acquisition: statistical learning and implicit learning. Both fields have explored this issue in some depth but, at present, neither the results from the infant studies used by the statistical learning literature nor the artificial grammar learning studies from the implicit learning literature can fully explain how children's syntax becomes adult-like. In this work, we consider an alternative explanation: that children use error-based learning to become mature syntax users. We discuss this proposal in the light of the behavioral findings from structural priming studies and the computational findings from Chang, Dell, and Bock's (2006) dual-path model, which incorporates properties from both statistical and implicit learning, and offers an explanation for syntax learning and structural priming using a common error-based learning mechanism. We then turn our attention to future directions for the field, suggesting how structural priming might inform the statistical learning and implicit learning literatures on the nature of the learning mechanism.
Topics: Child; Child Development; Humans; Learning; Models, Theoretical; Psycholinguistics
PubMed: 30414244
DOI: 10.1111/tops.12396
Dementia & Neuropsychologia 2021 (Review)
The differential diagnosis of primary progressive aphasia (PPA) is challenging due to overlapping clinical manifestations of the different variants of the disease. This is particularly true for the logopenic variant of PPA (lvPPA), in which such overlap has been reported with regard to impairments in repetition abilities. In this study, four individuals with lvPPA underwent standard neuropsychological and language assessments. The influence of psycholinguistic variables on their performance in word, nonword, and sentence repetition tasks was also specifically explored. Some level of heterogeneity was found in cognitive functions and in language. The four participants showed impairment in sentence repetition, in which their performance was negatively affected by semantic reversibility and syntactic complexity. This study supports the heterogeneity of lvPPA with respect to the cognitive and linguistic status of participants. It also shows that sentence repetition is influenced not only by length but also by semantic reversibility and syntactic complexity, two psycholinguistic variables known to place additional demands on phonological working memory.
PubMed: 34630930
DOI: 10.1590/1980-57642021dn15-030014
Frontiers in Psychology 2021
Though the term NATIVE SPEAKER/SIGNER is frequently used in language research, it is inconsistently conceptualized. Factors, such as age, order, and context of acquisition, in addition to social/cultural identity, are often differentially conflated. While the ambiguity and harmful consequences of the term NATIVE SPEAKER have been problematized across disciplines, much of this literature attempts to repurpose the term in order to include and/or exclude certain populations. This paper problematizes NATIVE SPEAKER within psycholinguistics, arguing that the term is both unhelpful to rigorous theory construction and harmful to marginalized populations by reproducing normative assumptions about behavior, experience, and identity. We propose that language researchers avoid NATIVE SPEAKER altogether, and we suggest alternate ways of characterizing language experience/use. The vagueness of NATIVE SPEAKER can create problems in research design (e.g., through systematically excluding certain populations), recruitment (as participants' definitions might diverge from researchers'), and analysis (by distilling continuous factors into under-specified binary categories). This can result in barriers to cross-study comparison, which is particularly concerning for theory construction and replicability. From a research ethics perspective, it matters how participants are characterized and included: Excluding participants based on binary/essentialist conceptualizations of nativeness upholds deficit perspectives toward multilingualism and non-hegemonic modes of language acquisition. Finally, by implicitly assuming the existence of a critical period, NATIVE SPEAKER brings with it theoretical baggage which not all researchers may want to carry. 
Given the issues above and how 'nativeness' is racialized (particularly in European and North American contexts), we ask that researchers consider carefully whether the exclusion of marginalized/minoritized populations is necessary or justified, particularly when NATIVE SPEAKER is used only as a way to achieve linguistic homogeneity. Instead, we urge psycholinguists to explicitly state the specific axes traditionally implied by NATIVENESS that they wish to target. We outline several of these (e.g., order of acquisition, allegiance, and comfort with providing intuitions) and give examples of how to recruit and describe participants while eschewing NATIVE SPEAKER. Shifting away from harmful conventions such as NATIVE SPEAKER will not only improve research design and analysis but is also one way we can co-create a more just and inclusive field.
PubMed: 34659029
DOI: 10.3389/fpsyg.2021.715843
Neuroscience and Biobehavioral Reviews Jan 2020 (Review)
This review of the neuroscience of anger is part of The Human Affectome Project, where we attempt to map anger and its components (i.e., physiological, cognitive, experiential) to the neuroscience literature (i.e., genetic markers, functional imaging of human brain networks) and to linguistic expressions used to describe anger feelings. Given the ubiquity of anger in both its normative and chronic states, specific language is used in humans to express states of anger. Following a review of the neuroscience literature, we explore the language that is used to convey angry feelings, as well as metaphors reflecting inner states of anger experience. We then discuss whether these linguistic expressions can be mapped on to the neural circuits during anger experience and to distinct components of anger. We also identify relationships between anger components, brain networks, and other affective research relevant to motivational states of dominance and basic needs for safety.
Topics: Aggression; Amygdala; Anger; Cerebral Cortex; Humans; Nerve Net; Psycholinguistics; Self-Control
PubMed: 31809773
DOI: 10.1016/j.neubiorev.2019.12.002
Behavior Research Methods Oct 2021
This paper introduces the Grievance Dictionary, a psycholinguistic dictionary that can be used to automatically understand language use in the context of grievance-fueled violence threat assessment. We describe the development of the dictionary, which was informed by suggestions from experienced threat assessment practitioners. These suggestions and subsequent human and computational word list generation resulted in a dictionary of 20,502 words annotated by 2318 participants. The dictionary was validated by applying it to texts written by violent and non-violent individuals, showing strong evidence for a difference between populations in several dictionary categories. Further classification tasks showed promising performance, but future improvements are still needed. Finally, we provide instructions and suggestions for the use of the Grievance Dictionary by security professionals and (violence) researchers.
Topics: Humans; Language; Psycholinguistics; Writing
PubMed: 33755932
DOI: 10.3758/s13428-021-01536-2
Neuroscience and Biobehavioral Reviews Oct 2019 (Review)
Fear is an emotion that serves as a driving factor in how organisms move through the world. In this review, we discuss the current understandings of the subjective experience of fear and the related biological processes involved in fear learning and memory. We first provide an overview of fear learning and memory in humans and animal models, encompassing the neurocircuitry and molecular mechanisms, the influence of genetic and environmental factors, and how fear learning paradigms have contributed to treatments for fear-related disorders, such as posttraumatic stress disorder. Current treatments as well as novel strategies, such as targeting the perisynaptic environment and use of virtual reality, are addressed. We review research on the subjective experience of fear and the role of autobiographical memory in fear-related disorders. We also discuss the gaps in our understanding of fear learning and memory, and the degree of consensus in the field. Lastly, the development of linguistic tools for assessments and treatment of fear learning and memory disorders is discussed.
Topics: Animals; Fear; Humans; Learning; Memory, Episodic; Phobic Disorders; Psycholinguistics; Stress Disorders, Post-Traumatic
PubMed: 30970272
DOI: 10.1016/j.neubiorev.2019.03.015
Frontiers in Neurorobotics 2020 (Review)
Crossmodal interaction in situated language comprehension is important for effective and efficient communication. The relationship between linguistic and visual stimuli provides mutual benefit: While vision contributes, for instance, information to improve language understanding, language in turn plays a role in driving the focus of attention in the visual environment. However, language and vision are two different representational modalities, which accommodate different aspects and granularities of conceptualizations. To integrate them into a single, coherent system solution is still a challenge, which could profit from inspiration by human crossmodal processing. Based on fundamental psycholinguistic insights into the nature of situated language comprehension, we derive a set of performance characteristics facilitating the robustness of language understanding, such as crossmodal reference resolution, attention guidance, or predictive processing. Artificial systems for language comprehension should meet these characteristics in order to be able to perform in a natural and smooth manner. We discuss how empirical findings on the crossmodal support of language comprehension in humans can be applied in computational solutions for situated language comprehension and how they can help to mitigate the shortcomings of current approaches.
PubMed: 32116634
DOI: 10.3389/fnbot.2020.00002
The Journal of Neuroscience Jun 2023
To understand language, we need to recognize words and combine them into phrases and sentences. During this process, responses to the words themselves are changed. In a step toward understanding how the brain builds sentence structure, the present study concerns the neural readout of this adaptation. We ask whether low-frequency neural readouts associated with words change as a function of being in a sentence. To this end, we analyzed an MEG dataset by Schoffelen et al. (2019) of 102 human participants (51 women) listening to sentences and word lists, the latter lacking any syntactic structure and combinatorial meaning. Using temporal response functions and a cumulative model-fitting approach, we disentangled delta- and theta-band responses to lexical information (word frequency) from responses to sensory and distributional variables. The results suggest that delta-band responses to words are affected by sentence context in time and space, over and above entropy and surprisal. In both conditions, the word frequency response spanned left temporal and posterior frontal areas; however, the response appeared later in word lists than in sentences. In addition, sentence context determined whether inferior frontal areas were responsive to lexical information. In the theta band, the amplitude was larger in the word list condition at ∼100 milliseconds in right frontal areas. We conclude that low-frequency responses to words are changed by sentential context. The results of this study show how the neural representation of words is affected by structural context and as such provide insight into how the brain instantiates compositionality in language.
Human language is unprecedented in its combinatorial capacity: we are capable of producing and understanding sentences we have never heard before. Although the mechanisms underlying this capacity have been described in formal linguistics and cognitive science, how they are implemented in the brain remains to a large extent unknown. A large body of earlier work from the cognitive neuroscientific literature implies a role for delta-band neural activity in the representation of linguistic structure and meaning. In this work, we combine these insights and techniques with findings from psycholinguistics to show that meaning is more than the sum of its parts: the delta-band MEG signal differentially reflects lexical information inside and outside sentence structures.
Topics: Humans; Female; Language; Brain; Linguistics; Psycholinguistics; Brain Mapping; Semantics
PubMed: 37221093
DOI: 10.1523/JNEUROSCI.0964-22.2023
Frontiers in Human Neuroscience 2023
In face-to-face communication, humans are faced with multiple layers of discontinuous multimodal signals, such as head, face, hand gestures, speech and non-speech sounds, which need to be interpreted as coherent and unified communicative actions. This implies a fundamental computational challenge: optimally binding only signals belonging to the same communicative action while segregating signals that are not connected by the communicative content. How do we achieve such an extraordinary feat, reliably, and efficiently? To address this question, we need to further move the study of human communication beyond speech-centred perspectives and promote a multimodal approach combined with interdisciplinary cooperation. Accordingly, we seek to reconcile two explanatory frameworks recently proposed in psycholinguistics and sensory neuroscience into a neurocognitive model of multimodal face-to-face communication. First, we introduce a psycholinguistic framework that characterises face-to-face communication at three parallel processing levels: multiplex signals, multimodal gestalts and multilevel predictions. Second, we consider the recent proposal of a lateral neural visual pathway specifically dedicated to the dynamic aspects of social perception and reconceive it from a multimodal perspective ("lateral processing pathway"). Third, we reconcile the two frameworks into a neurocognitive model that proposes how multiplex signals, multimodal gestalts, and multilevel predictions may be implemented along the lateral processing pathway. Finally, we advocate a multimodal and multidisciplinary research approach, combining state-of-the-art imaging techniques, computational modelling and artificial intelligence for future empirical testing of our model.
PubMed: 36816496
DOI: 10.3389/fnhum.2023.1108354