Explore (New York, N.Y.), 2023
CONTEXT
During the COVID-19 pandemic, medical and holistic health practitioners turned to virtual healthcare. As energy healing practitioners and educators who shifted to an online format, we felt it was important to document descriptions of client experiences of virtual energy healing.
OBJECTIVE
To describe client experiences of virtual energy healing sessions.
DESIGN
Descriptive pre-post intervention design.
SETTING AND INTERVENTIONS
Two experienced and eclectic energy healing practitioners developed a protocol and conducted energy healing sessions via Zoom.
PARTICIPANTS
A convenience sample of Sisters of St. Joseph of Carondelet (CSJ) Consociates, people of diverse life-styles and spiritual traditions who are committed to living the mission of the CSJs in the St. Paul Province.
MAIN OUTCOME MEASURES
Pre-post 10-point Likert scale rating of relaxation, well-being, and pain. Pre-post primarily qualitative questionnaires.
RESULTS
Results indicated significant pre-post differences: pre-session relaxation (M = 5.036, SD = 2.9) vs. post-session relaxation (M = 7.86, SD = 6.4), t(13) = 2.16, p = .0017; pre-session well-being (M = 5.86, SD = 4.29) vs. post-session well-being (M = 8, SD = 2.31), t(13), p = .0001; pre-session pain (M = 4.0, SD = 6.15) vs. post-session pain (M = 2.25, SD = 3.41), t(13) = 2.16, p = .004. Thematic analysis revealed six themes related to client experiences of virtual energy healing: 1) embodied sensations, 2) relaxation, 3) release (a letting go of tasks/anxieties/worries), 4) a sense of peace/joy/calm, 5) connection to themselves, others, and something larger, and 6) surprise that virtual energy healing works.
LIMITATIONS
This was a descriptive study using a convenience sample: there was no control group, the sample size was small, and the sample may have been more inclined than the general population to report favorable results because of its spiritual perspectives. Results are not generalizable.
IMPLICATIONS
Clients reported positive experiences of virtual energy healing and said they would do it again. However, more research is needed to understand the variables that influenced the results and the underlying mechanisms of action.
Topics: Humans; Pandemics; Surveys and Questionnaires; Pain; Anxiety; Holistic Health
PubMed: 37270354
DOI: 10.1016/j.explore.2023.03.012
Frontiers in Psychology, 2023
INTRODUCTION
Early linguistic background, and in particular access to language, lays the foundation of future reading skills in deaf and hard-of-hearing signers. The current study aims to estimate the impact of two factors, early access to sign language and to spoken language, on reading fluency in deaf and hard-of-hearing adult Russian Sign Language speakers.
METHODS
In the eye-tracking experiment, 26 deaf and 14 hard-of-hearing native Russian Sign Language speakers read 144 sentences from the Russian Sentence Corpus. Analysis of global eye-movement trajectories (scanpaths) was used to identify clusters of typical reading trajectories. The role of early access to sign and spoken language as well as vocabulary size as predictors of the more fluent reading pattern was tested.
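The clustering step above groups similar global eye-movement trajectories; the specific algorithm is not detailed here, but a common building block for comparing scanpaths is a string-edit (Levenshtein) distance over sequences of fixated regions. A minimal sketch, with illustrative region labels rather than the study's actual interest areas:

```python
def scanpath_distance(a, b):
    """Levenshtein edit distance between two scanpaths, each encoded as a
    string of fixated interest areas (e.g. word positions in a sentence)."""
    m, n = len(a), len(b)
    # dp[i][j] = minimum edits to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

# fluent reading is mostly left-to-right; a regressive pattern revisits words
fluent = "ABCDEF"
regressive = "ABCBCDEF"
```

A pairwise distance matrix built this way can then be fed to any standard clustering routine to separate fluent from less fluent reading patterns.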
RESULTS
Hard-of-hearing signers with early access to sign language read more fluently than those who were exposed to sign language later in life or deaf signers without access to speech sounds. No association between early access to spoken language and reading fluency was found.
DISCUSSION
Our results suggest a unique advantage for the hard-of-hearing individuals from having early access to both sign and spoken language and support the existing claims that early exposure to sign language is beneficial not only for deaf but also for hard-of-hearing children.
PubMed: 37799519
DOI: 10.3389/fpsyg.2023.1145638
Sensors (Basel, Switzerland), Jan 2024
Sign language is a natural communication method for conveying messages within the deaf community. In the study of sign language recognition through wearable sensors, data sources are limited and the data acquisition process is complex. This research aims to collect an American Sign Language dataset with a wearable inertial motion capture system and to realize the recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset consisting of 300 commonly used sentences is gathered from 3 volunteers. In the design of the recognition network, the model mainly consists of three layers: a convolutional neural network, a bi-directional long short-term memory network, and connectionist temporal classification. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. In the design of the translation network, the encoder-decoder model is mainly based on long short-term memory with global attention. The word error rate of end-to-end translation is 16.63%. The proposed method has the potential to recognize more sign language sentences with reliable inertial data from the device.
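The connectionist temporal classification (CTC) layer in the recognition network above is conventionally decoded at inference time by collapsing per-frame predictions. A minimal sketch of standard CTC greedy decoding (the label indices here are illustrative, not the paper's vocabulary):

```python
BLANK = 0  # conventional CTC blank index

def ctc_greedy_decode(frame_argmax):
    """Collapse a per-frame argmax sequence into a label sequence:
    merge consecutive repeats, then drop blanks (standard CTC decoding)."""
    out = []
    prev = None
    for label in frame_argmax:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

# e.g. frames: blank, sign 3 held for 3 frames, blank, sign 7 for 2 frames
frames = [0, 3, 3, 3, 0, 7, 7, 0]
decoded = ctc_greedy_decode(frames)
```

The blank symbol is what lets CTC represent a repeated sign: two occurrences of the same label are only kept distinct if a blank frame separates them.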
Topics: Humans; United States; Sign Language; Motion Capture; Neurons; Perception; Wearable Electronic Devices
PubMed: 38257544
DOI: 10.3390/s24020453
International Journal of Clinical..., 2023
AIM
The aim of the study is to compare the effectiveness of visual and sign motivation on the oral hygiene of students with hearing and speech impairment studying in special schools of Meerut, Uttar Pradesh, India.
MATERIALS AND METHODS
A cross-sectional study was carried out on 200 students. The sample was divided into two groups. Ethical clearance was obtained from the Institutional Ethical Committee. Data were collected at three points in time: at baseline, at the 1st month, and at the 3rd month.
RESULTS
In the age group 8-13 years, on intergroup comparison of mean oral hygiene index (OHI) scores, no significant difference was observed on the first visit (p-value of 0.351) or the second visit (p-value of 0.687), but on comparing the mean simplified oral hygiene index (OHI-S) scores on the third visit, a significant difference was observed (p-value of 0.03). In the age group 14-18 years, on intergroup comparison of mean OHI-S scores, no significant difference was observed on the first visit (p-value of 0.593) or the second visit (p-value of 0.404), but on comparing the mean OHI-S scores on the third visit, a significant difference was observed (p-value of 0.018). Both groups showed a positive impact of reinforcement on the oral hygiene of students in this age group as well.
CONCLUSION
There was a significant improvement in oral hygiene status and in participant satisfaction toward oral health in both groups. Sign language video playback was not as effective and efficient in improving the maintenance of oral health in hearing- and speech-impaired children as sign language itself.
CLINICAL SIGNIFICANCE
This study has helped in the better understanding of different methods of maintaining good oral hygiene of hearing and speech-impaired children.
HOW TO CITE THIS ARTICLE
Singh R, Saraf BG, Sheoran N. Comparison of Effectiveness of Visual and Sign Motivation on the Oral Hygiene of Students. Int J Clin Pediatr Dent 2023;16(5):671-677.
PubMed: 38162250
DOI: 10.5005/jp-journals-10005-2640
Heliyon, Jan 2024
Sign language recognition (SLR) converts sign language gestures into spoken or written language. This technology helps people who are deaf or hard of hearing by providing them with a way to interact with people who do not know sign language. It can also be utilized for automatic captioning in live events and videos. There are distinct approaches to SLR, including deep learning (DL), computer vision (CV), and machine learning (ML). One common approach uses cameras to capture the signer's hand and body movements and processes the video data to recognize the gestures. Challenges in SLR include the variability of sign language across cultures and individuals, the difficulty of certain signs, and the need for real-time processing. This study introduces Automated Sign Language Detection and Classification using the Reptile Search Algorithm with Hybrid Deep Learning (SLDC-RSAHDL). The presented SLDC-RSAHDL technique detects and classifies different types of signs using DL and metaheuristic optimizers. In the SLDC-RSAHDL technique, a MobileNet feature extractor is utilized to produce feature vectors, and its hyperparameters are adjusted by the manta ray foraging optimization (MRFO) technique. For sign language classification, the SLDC-RSAHDL technique applies an HDL model, which incorporates a convolutional neural network (CNN) and long short-term memory (LSTM). Finally, the RSA is exploited for optimal hyperparameter selection of the HDL model, resulting in an improved detection rate. Experimental analysis of the SLDC-RSAHDL technique on a sign language dataset demonstrates its improved performance over other existing DL techniques.
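RSA and MRFO, used above for hyperparameter selection, are population-based metaheuristics. As a rough illustration of the general idea only (this is a deliberately simplified stand-in, not an implementation of either algorithm), a population search over two hypothetical hyperparameters might look like:

```python
import random

def metaheuristic_search(objective, bounds, pop_size=10, iters=30, seed=42):
    """Simplified population-based search: repeatedly perturb candidates
    around the current best solution and keep the global best."""
    rng = random.Random(seed)
    def sample():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [sample() for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iters):
        pop = []
        for _ in range(pop_size):
            # move toward the best candidate with Gaussian perturbation
            moved = [b + rng.gauss(0, 0.1) * (hi - lo)
                     for b, (lo, hi) in zip(best, bounds)]
            # clamp back into the search bounds
            pop.append([min(max(v, lo), hi)
                        for v, (lo, hi) in zip(moved, bounds)])
        cand = min(pop, key=objective)
        if objective(cand) < objective(best):
            best = cand
    return best

def val_loss(x):
    # toy stand-in for validation loss over (learning rate, dropout)
    return (x[0] - 0.01) ** 2 + (x[1] - 0.3) ** 2

best = metaheuristic_search(val_loss, [(0.0001, 0.1), (0.0, 0.9)])
```

In the paper's pipeline, the objective would be model validation performance, which is far more expensive to evaluate; metaheuristics like RSA and MRFO add structured exploration rules on top of this basic perturb-and-select loop.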
PubMed: 38148822
DOI: 10.1016/j.heliyon.2023.e23252
Topics in Early Childhood Special..., Aug 2023
Deaf and hard of hearing (DHH) children experience systematic barriers to equitable education due to intentional or unintentional ableist views that can lead to a general lack of awareness about the value of natural sign languages and insufficient resources supporting sign language development. Furthermore, an imbalance of information in favor of spoken languages often stems from a phonocentric perspective that views signing as an inferior form of communication and as a hindrance to spoken language development. To the contrary, research demonstrates that early adoption of a natural sign language confers critical protection from the risks of language deprivation without endangering spoken language development. In this position paper, we draw attention to deep societal biases against early exposure to a natural sign language in the information presented to parents of DHH children. We outline actions that parents and professionals can adopt to maximize DHH children's chances for on-time language development.
PubMed: 37766876
DOI: 10.1177/02711214211031307
Data in Brief, Feb 2024
Tamil is one of the oldest existing languages, spoken by around 65 million people across India, Sri Lanka, and South-East Asia. Countries such as Fiji and South Africa also have significant populations of Tamil ancestry. Tamil is a complex language with 247 characters. A labelled dataset for Tamil fingerspelling, named TLFS23, has been created for research on vision-based fingerspelling translators for speech- and hearing-impaired people. The dataset opens up avenues to develop automated systems as translators and interpreters for effective communication between fingerspelling users and non-users, using computer vision and deep learning algorithms. One thousand images representing each unique finger flexion motion for every Tamil character were collected, constituting a large dataset of 248 classes with a total of 255,155 images. The images were contributed by 120 individuals from different age groups. The dataset is made publicly available at: https://data.mendeley.com/datasets/39kzs5pxmk/2.
PubMed: 38229923
DOI: 10.1016/j.dib.2023.109961
NPJ Science of Learning, Dec 2023
Research has shown a link between the acquisition of numerical concepts and language, but exactly how linguistic input matters for numerical development remains unclear. Here, we examine both symbolic (number word knowledge) and non-symbolic (numerical discrimination) numerical abilities in a population in which access to language is limited early in development: oral deaf and hard-of-hearing (DHH) preschoolers born to hearing parents who do not know a sign language. The oral DHH children demonstrated lower numerical discrimination skills, verbal number knowledge, conceptual understanding of the word "more", and vocabulary relative to their hearing peers. Importantly, however, analyses revealed that group differences in the numerical tasks, but not in vocabulary, disappeared once differences in how long the children had had auditory access to spoken language input via hearing technology were taken into account. The results offer insights into the role language plays in emerging number concepts.
PubMed: 38071222
DOI: 10.1038/s41539-023-00202-w
Journal of Multidisciplinary Healthcare, 2024
PURPOSE
While the services available to deaf people in the Middle East have yet to be documented, they need improvement in several countries. The aim of this article was to reduce miscommunication between dentists and deaf patients through the introduction of an optional sign language course for pre-doctoral students at the King Abdulaziz University Faculty of Dentistry (KAUFD).
PATIENTS AND METHODS
All fourth-year pre-doctoral students were invited to participate in an Arabic sign language course. A survey with 11 multiple-choice and 38 true/false questions (each with an "I don't know" option) was distributed both before and two weeks after the course. This survey was extensively validated and pilot-tested before distribution.
RESULTS
A total of 141 students responded (a response rate of 84.9%); 49 (34.8%) were male and 92 (65.2%) were female. The pre-doctoral students had a higher overall knowledge score (mean 22.9 ± 14.8) and sign language skills score (11.1 ± 1.7) after the course than before it (9.8 ± 7.1 and 3.7 ± 3.3, respectively) (all p-values <0.001). Every individual question scored lower pre-course than post-course (p-value <0.05).
CONCLUSION
Deaf people may face difficulties communicating at dental health care clinics, which could be improved by equipping dental providers with cultural competency training such as this course.
PubMed: 38222476
DOI: 10.2147/JMDH.S420388
Journal of Imaging, Oct 2023
Communication between Deaf and hearing individuals remains a persistent challenge requiring attention to foster inclusivity. Despite notable efforts in the development of digital solutions for sign language recognition (SLR), several issues persist, such as cross-platform interoperability and strategies for tokenizing signs to enable continuous conversations and coherent sentence construction. To address such issues, this paper proposes a non-invasive Portuguese Sign Language (Língua Gestual Portuguesa, or LGP) interpretation system-as-a-service, leveraging skeletal posture sequence inference powered by long short-term memory (LSTM) architectures. To address the scarcity of examples during machine learning (ML) model training, dataset augmentation strategies are explored. Additionally, a buffer-based interaction technique is introduced to facilitate the tokenization of LGP terms. This technique provides real-time feedback to users, allowing them to gauge the time remaining to complete a sign, which aids in the construction of grammatically coherent sentences based on inferred terms/words. To support human-like conditioning rules for interpretation, a large language model (LLM) service is integrated. Experiments reveal that LSTM-based neural networks, trained with 50 LGP terms and subjected to data augmentation, achieved accuracy levels ranging from 80% to 95.6%. Users unanimously reported a high level of intuitiveness when using the buffer-based interaction strategy for term/word tokenization. Furthermore, tests with an LLM, specifically ChatGPT, demonstrated promising semantic correlation rates in generated sentences, comparable to expected sentences.
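The buffer-based interaction technique described above can be sketched roughly as follows; the class, window size, and stub classifier are all illustrative assumptions, not the paper's implementation:

```python
class SignBuffer:
    """Sketch of a buffer-based tokenizer: accumulate skeletal-pose frames
    and emit one token per completed sign window."""
    def __init__(self, classify, window=30):
        self.classify = classify  # maps a list of frames to a term/word
        self.window = window      # frames needed to complete one sign
        self.frames = []
        self.tokens = []

    def progress(self):
        # fraction of the current sign completed: this is the kind of
        # real-time feedback the interaction technique shows the user
        return len(self.frames) / self.window

    def push(self, frame):
        self.frames.append(frame)
        if len(self.frames) == self.window:
            self.tokens.append(self.classify(self.frames))
            self.frames = []  # reset the buffer for the next sign

# stub classifier; in a real system this would call the LSTM inference service
stub = lambda frames: f"term_{len(frames)}"
buf = SignBuffer(stub, window=3)
for f in range(7):
    buf.push(f)
```

The accumulated `tokens` list is what a downstream LLM service could then rewrite into a grammatically coherent sentence.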
PubMed: 37998082
DOI: 10.3390/jimaging9110235