International Journal of Preventive... 2023
BACKGROUND
Noise is one of the most important harmful factors in the environment. There are limited studies on the effect of noise loudness on brain signals and attention. The main objective of this study was to investigate the relationship between exposure to different loudness levels and a brain index, types of attention, and subjective evaluation.
METHODS
Four noises with different loudness levels were generated. Sixty-four male students participated in this study. Each subject performed the integrated visual and auditory continuous performance test (IVA-2) test before and during exposure to noise loudness signals while their electroencephalography was recorded. Finally, the alpha-to-gamma ratio (AGR), five types of attention, and the subjective evaluation results were examined.
RESULTS
During exposure to loudness levels, the AGR and types of attention decreased while the NASA Task Load Index (NASA-TLX) scores increased. Noise exposure at lower loudness levels (65 and 75 phon) led to greater attention dysfunction than at higher loudness levels. The AGR was significantly changed during exposure to 65 and 75 phon and audio stimuli. This significant change was observed during exposure at all loudness levels except 85 phon with visual stimuli. Divided and sustained attention changed significantly during exposure to all loudness levels and visual stimuli. The AGR had a significant inverse correlation with the total score of NASA-TLX during noise exposure.
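The alpha-to-gamma ratio reported above is, in general terms, a ratio of EEG band powers. A minimal numpy-only sketch, assuming conventional band edges (alpha 8-13 Hz, gamma 30-45 Hz) since the abstract does not state the study's exact bands:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    # Periodogram via rFFT; sum |X(f)|^2 within [lo, hi) Hz
    f = np.fft.rfftfreq(len(x), d=1 / fs)
    pxx = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum()

def alpha_gamma_ratio(eeg, fs=256):
    # Band edges are assumptions, not the study's stated values
    alpha = band_power(eeg, fs, 8, 13)
    gamma = band_power(eeg, fs, 30, 45)
    return alpha / gamma

# Synthetic check: a 10 Hz-dominated signal should yield AGR >> 1
fs = 256
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
print(alpha_gamma_ratio(sig, fs))
```

A lower AGR under noise, as reported, would correspond to relatively less alpha (or more gamma) power by this definition.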
CONCLUSIONS
These results can lead to the design of methods to control the psychological effects of noise at specific frequencies (250 and 4000 Hz) and can prevent non-auditory damage to human cognitive performance in industrial and urban environments.
PubMed: 38264555
DOI: 10.4103/ijpvm.ijpvm_395_22
The Journal of the Acoustical Society... Jan 2024
Hearing-impaired (HI) listeners have been shown to exhibit increased fusion of dichotic vowels, even with different fundamental frequency (F0), leading to binaural spectral averaging and interference. To determine if similar fusion and averaging occurs for consonants, four natural and synthesized stop consonants (/pa/, /ba/, /ka/, /ga/) at three F0s of 74, 106, and 185 Hz were presented dichotically, with ΔF0 varied, to normal-hearing (NH) and HI listeners. Listeners identified the one or two consonants perceived, and response options included /ta/ and /da/ as fused percepts. As ΔF0 increased, both groups showed decreases in fusion and increases in percent correct identification of both consonants, with HI listeners displaying similar fusion but poorer identification. Both groups exhibited spectral averaging (psychoacoustic fusion) of place of articulation but phonetic feature fusion for differences in voicing. With synthetic consonants, NH subjects showed increased fusion and decreased identification. Most HI listeners were unable to discriminate the synthetic consonants. The findings suggest smaller differences between groups in consonant fusion than vowel fusion, possibly due to the presence of more cues for segregation in natural speech or reduced reliance on spectral cues for consonant perception. The inability of HI listeners to discriminate synthetic consonants suggests a reliance on cues other than formant transitions for consonant discrimination.
Topics: Humans; Hearing Loss, Sensorineural; Speech Perception; Hearing Loss; Psychoacoustics; Phonetics; Hearing
PubMed: 38174963
DOI: 10.1121/10.0024245
Physics of Life Reviews Mar 2024
Interpersonal synchrony implies simultaneity, musical improvisation requires rules. Comment on "Musical engagement as a duet of tight synchrony and loose interpretability" by Tal-Chen Rabinowitch.
Topics: Music; Creativity
PubMed: 38160521
DOI: 10.1016/j.plrev.2023.12.007
Substance Use & Misuse 2024
Illicit substance use is common at music festivals. One could question whether festival attendees deliberately plan to take drugs at such events or whether their illicit (poly)drug use is provoked by specific circumstances, such as the presence of peers or a general belief that others are using drugs at the festival. The present study implemented the prototype willingness model, which assesses whether illicit drug use at music festivals is a rational or a more spontaneous decision-making process. A three-wave panel survey was conducted, questioning festival attendees before (n = 304, 60.86% males), during, and after music festival visits. In total, 186 people (59.68% males) between 18 and 55 years (M = 27.80 years; SD = 8.19) completed all three surveys, of whom 62.9% had taken one or more different illicit substances at the festival. Positive attitudes toward illicit drug consumption were most firmly related to attendees' intentions to take drugs at festivals. Additionally, the more festival visitors identified themselves with the prototype of an attendee using drugs, the more likely they were to be willing to use them. The perceived presence of illicit substances at such events was also strongly related to the actual behavior. The findings suggest that illicit drug use at music festivals relates to both rational choice and unplanned decision-making.
Topics: Male; Humans; Female; Holidays; Music; Substance-Related Disorders; Illicit Drugs; Surveys and Questionnaires
PubMed: 38129990
DOI: 10.1080/10826084.2023.2294979
Sensors (Basel, Switzerland) Dec 2023
Within the broader context of improving interactions between artificial intelligence and humans, the question has arisen regarding whether auditory and rhythmic support could increase attention for visual stimuli that do not stand out clearly from an information stream. To this end, we designed an experiment inspired by pip-and-pop but more appropriate for eliciting attention and P3a event-related potentials (ERPs). In this study, the aim was to distinguish between targets and distractors based on the subject's electroencephalography (EEG) data. We achieved this objective by employing different machine learning (ML) methods for both individual-subject (IS) and cross-subject (CS) models. Finally, we investigated which EEG channels and time points were used by the model to make its predictions using saliency maps. We were able to successfully perform the aforementioned classification task for both the IS and CS scenarios, reaching classification accuracies up to 76%. In accordance with the literature, the model primarily used the parietal-occipital electrodes between 200 ms and 300 ms after the stimulus to make its prediction. The findings from this research contribute to the development of more effective P300-based brain-computer interfaces. Furthermore, they validate the EEG data collected in our experiment.
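The target-versus-distractor decoding described above can be illustrated with a simplified single-channel simulation. Everything below is an assumption for illustration: the epochs are synthetic rather than the paper's data, the P3a-like bump and the 200-300 ms mean-amplitude feature are stand-ins motivated by the saliency-map finding, and the classifier is a minimal threshold rule rather than the ML methods actually used:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 0.6, 1 / fs)  # 600 ms single-channel epochs

def make_epoch(is_target):
    # Hypothetical epoch: targets carry a P3a-like positive bump
    # around 250 ms; both classes get Gaussian noise.
    noise = rng.normal(0.0, 1.0, t.size)
    bump = 3.0 * np.exp(-((t - 0.25) ** 2) / (2 * 0.03 ** 2))
    return noise + (bump if is_target else 0.0)

X = np.array([make_epoch(i % 2 == 0) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)])

# Feature mirroring the reported window: mean amplitude 200-300 ms
win = (t >= 0.2) & (t <= 0.3)
feat = X[:, win].mean(axis=1)

# Threshold halfway between class means (a minimal linear classifier)
thr = (feat[y].mean() + feat[~y].mean()) / 2
acc = ((feat > thr) == y).mean()
print(acc)
```

With a clean synthetic bump the toy classifier separates the classes almost perfectly; real EEG, as the 76% figure suggests, is far noisier.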
Topics: Humans; Artificial Intelligence; Acoustic Stimulation; Electroencephalography; Attention; Event-Related Potentials, P300; Evoked Potentials
PubMed: 38067961
DOI: 10.3390/s23239588
PloS One 2023
There is debate whether the foundations of consonance and dissonance are rooted in culture or in psychoacoustics. In order to disentangle the contribution of culture and psychoacoustics, we considered automatic responses to the perfect fifth and the major second (flattened by 25 cents) intervals alongside conscious evaluations of the same intervals across two cultures and two levels of musical expertise. Four groups of participants completed the tasks: expert performers of Lithuanian Sutartinės, English-speaking musicians in Western diatonic genres, Lithuanian non-musicians, and English-speaking non-musicians. Sutartinės singers were chosen as this style of singing is an example of 'beat diaphony', where the intervals between parts form predominantly rough sonorities and audible beats. There was no difference in automatic responses to intervals, suggesting that an aversion to acoustically rough intervals is not governed by cultural familiarity but may have a physical basis in how the human auditory system works. However, conscious evaluations resulted in group differences, with Sutartinės singers rating the flattened major second as more positive than did the other groups. The results are discussed in the context of recent developments in consonance and dissonance research.
Topics: Humans; Music; Singing; Psychoacoustics; Recognition, Psychology; Consciousness; Acoustic Stimulation; Auditory Perception
PubMed: 38051728
DOI: 10.1371/journal.pone.0294645
American Journal of Audiology Dec 2023
PURPOSE
In the present report, we reviewed the role of cortical auditory evoked potentials (CAEPs) as an objective measure during the evaluation and management process in children with auditory neuropathy spectrum disorder (ANSD).
METHOD
We reviewed the results of CAEP recordings in 66 patients with ANSD aged between 2 months and 12 years and assessed the relationship between their characteristics (prevalence, morphology, latencies, and amplitudes) and various clinical features, including the mode of medical management.
RESULTS
Overall, the CAEPs were present in 85.2% of the ears tested. Factors such as prematurity, medical complexity, neuronal issues, or presence of syndromes did not have an effect on the presence or absence of CAEPs. CAEP latencies were significantly shorter in ears with cochlear nerve deficiency than in ears with a normal caliber nerve. Three different patterns of CAEP responses were observed in patients with bilateral ANSD and present cochlear nerves: (a) responses with normal morphology and presence of both P1-P2 and N2 components, (b) responses with abnormal morphology and presence of the N2 component but undefined P1-P2 peak, and (c) entirely absent responses. None of the patients with normal, mild, or moderate degree of hearing loss had a complete absence of CAEP responses. No significant differences were uncovered when comparing the latencies across unaided and aided children and children who later received cochlear implants.
CONCLUSIONS
The CAEP protocol used in our ANSD program provided information about the presence or absence of central auditory stimulation. Absent responses typically fit into an overall picture of complete auditory deprivation, and all of these children were ultimately offered cochlear implants after failing to develop oral language. Present responses, on the other hand, were acknowledged as a sign of some degree of auditory stimulation but always interpreted with caution given that prognostic implications remain unclear.
PubMed: 38048283
DOI: 10.1044/2023_AJA-23-00051
Cognitive Science Dec 2023
Definitions of syncopation share two characteristics: the presence of a meter or analogous hierarchical rhythmic structure and a displacement or contradiction of that structure. These attributes are translated in terms of a Bayesian theory of syncopation, where the syncopation of a rhythm is inferred based on a hierarchical structure that is, in turn, learned from the ongoing musical stimulus. Several experiments tested its simplest possible implementation, with equally weighted priors associated with different meters and independence of auditory events, which can be decomposed into two terms representing note density and deviation from a metric hierarchy. A computational simulation demonstrated that extant measures of syncopation fall into two distinct factors analogous to the terms in the simple Bayesian model. Next, a series of behavioral experiments found that perceived syncopation is significantly related to both terms, offering support for the general Bayesian construction of syncopation. However, we also found that the prior expectations associated with different metric structures are not equal across meters and that there is an interaction between density and hierarchical deviation, implying that auditory events are not independent from each other. Together, these findings provide evidence that syncopation is a manifestation of a form of temporal expectation that can be directly represented in Bayesian terms and offer a complementary, feature-driven approach to recent Bayesian models of temporal prediction.
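The two-term decomposition above (note density plus deviation from a metric hierarchy) can be sketched for a binary rhythm. The metric weights and the exact form of each term below are assumptions in the style of classic hierarchy-based syncopation measures, not the paper's Bayesian model:

```python
import numpy as np

# Metric-hierarchy weights for one 4/4 bar at 16th-note resolution
# (Longuet-Higgins/Lee-style; an assumption, not the paper's learned prior)
weights = np.array([4, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1])

def syncopation_terms(onsets):
    # onsets: binary vector, 1 where a note occurs
    onsets = np.asarray(onsets, dtype=float)
    density = onsets.mean()  # note-density term
    # hierarchy-deviation term: onsets at metrically weak positions count more
    deviation = (onsets * (weights.max() - weights)).sum() / max(onsets.sum(), 1)
    return density, deviation

on_beat  = [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0]   # unsyncopated
off_beat = [0,0,1,0, 0,1,0,0, 0,0,1,0, 0,1,0,0]   # syncopated
print(syncopation_terms(on_beat), syncopation_terms(off_beat))
```

Both rhythms have the same density, so by this toy measure only the hierarchy-deviation term distinguishes them; the paper's behavioral finding that the two terms interact is precisely what this independent-terms sketch cannot capture.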
Topics: Humans; Auditory Perception; Motivation; Bayes Theorem; Music; Learning
PubMed: 38043104
DOI: 10.1111/cogs.13390
Human Factors Nov 2023
OBJECTIVE
To design and develop a Portable Auditory Localization Acclimation Training (PALAT) system capable of producing psychoacoustically accurate localization cues; evaluate the training effect against a proven full-scale, laboratory-grade system under three listening conditions; and determine if the PALAT system is sensitive to differences among electronic level-dependent hearing protection devices (HPDs).
BACKGROUND
In-laboratory auditory localization training has demonstrated the ability to improve localization performance with the open (natural) ear, that is, unoccluded, and while wearing HPDs. The military requires a portable system capable of imparting similar training benefits as those demonstrated in laboratory experiments.
METHOD
In a full-factorial repeated measures design experiment, 12 audiometrically normal participants completed localization training and testing using an identical, optimized training protocol on two training systems under three listening conditions (open ear, TEP-100, and ComTac™ III). Statistical tests were performed on mean absolute accuracy score and front-back reversal errors.
RESULTS
No statistical difference existed between the PALAT and laboratory-grade DRILCOM systems on two dependent localization accuracy measurements at all stages of training. In addition, the PALAT system detected the same localization performance differences among the three listening conditions.
CONCLUSION
The PALAT system imparted similar training benefits as the DRILCOM system and was sensitive to HPD localization performance differences.
APPLICATION
The user-operable PALAT system and optimized training protocol can be employed by the military, law enforcement, and various industries to improve auditory localization performance in conditions where auditory situation awareness is critical to safety.
PubMed: 38035629
DOI: 10.1177/00187208231209137
Scientific Reports Nov 2023
Meta-Analysis
Sensorimotor synchronization strategies have been frequently used for gait rehabilitation in different neurological populations. Despite these positive effects on gait, the attentional processes required to dynamically attend to auditory stimuli need elaboration. Here, we investigate auditory attention, quantified by EEG recordings, in neurological populations compared to healthy controls. The literature was systematically searched in the PubMed and Web of Science databases. Inclusion criteria were investigation of auditory attention quantified by EEG recordings in neurological populations in cross-sectional studies. In total, 35 studies were included, covering participants with Parkinson's disease (PD), stroke, Traumatic Brain Injury (TBI), Multiple Sclerosis (MS), and Amyotrophic Lateral Sclerosis (ALS). A meta-analysis was performed separately on P3 amplitude and latency to compare neurological populations with healthy controls. Overall, neurological populations showed impairments in auditory processing, in terms of magnitude and delay, compared to healthy controls. Considering individual auditory processes, and thereafter selecting and/or designing the auditory structure used during sensorimotor synchronization paradigms in neurological physical rehabilitation, is recommended.
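The core arithmetic behind pooling per-study P3 differences in a meta-analysis like this one is inverse-variance weighting. A minimal sketch: the effect sizes below are hypothetical, not values from the included studies, and a fixed-effect model is used for brevity where the review may well have used a random-effects model:

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    # Inverse-variance weighted pooled effect (fixed-effect model)
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))  # standard error of the pooled estimate
    return est, se

# Hypothetical per-study standardized mean differences
# (patients minus controls; negative = reduced P3 amplitude)
effects = [-0.8, -0.5, -1.1, -0.3]
variances = [0.04, 0.09, 0.06, 0.05]
est, se = fixed_effect_pool(effects, variances)
print(round(est, 3), round(se, 3))
```

Precise (low-variance) studies dominate the pooled estimate, which is why the result sits closer to the -0.8 and -1.1 studies than a plain average would.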
Topics: Humans; Cross-Sectional Studies; Attention; Parkinson Disease; Gait; Electroencephalography
PubMed: 38030693
DOI: 10.1038/s41598-023-47597-5