NPJ Digital Medicine Sep 2023
Review
The rapid advancement of telehealth technologies has the potential to revolutionize healthcare delivery, especially in developing countries and resource-limited settings. Telehealth played a vital role during the COVID-19 pandemic, supporting numerous healthcare services. We conducted a systematic review to gain insights into the characteristics, barriers, and successful experiences in implementing telehealth during the COVID-19 pandemic in China, a representative of the developing countries. We also provide insights for other developing countries that face similar challenges to developing and using telehealth during or after the pandemic. This systematic review was conducted through searching five prominent databases including PubMed/MEDLINE, Embase, Scopus, Cochrane Library, and Web of Science. We included studies clearly defining any use of telehealth services in all aspects of health care during the COVID-19 pandemic in China. We mapped the barriers, successful experiences, and recommendations based on the Consolidated Framework for Implementation Research (CFIR). A total of 32 studies met the inclusion criteria. Successfully implementing and adopting telehealth in China during the pandemic necessitates strategic planning across aspects at society level (increasing public awareness and devising appropriate insurance policies), organizational level (training health care professionals, improving workflows, and decentralizing tasks), and technological level (strategic technological infrastructure development and designing inclusive telehealth systems). WeChat, a widely used social networking platform, was the most common platform used for telehealth services. China's practices in addressing the barriers may provide implications and evidence for other developing countries or low-and middle- income countries (LMICs) to implement and adopt telehealth systems.
PubMed: 37723237
DOI: 10.1038/s41746-023-00908-6
High incidence of trigger finger after carpal tunnel release: a systematic review and meta-analysis.
International Journal of Surgery... Aug 2023
Meta-Analysis
INTRODUCTION
Trigger finger (TF) often occurs after carpal tunnel release (CTR), but the mechanism and outcomes remain inconsistent. This study evaluated the incidence of TF after CTR and its related risk factors.
MATERIALS AND METHODS
PubMed, Embase, and Scopus databases were searched up to 27 August 2022 with the following keywords: "carpal tunnel release" and "trigger finger". Studies with complete data on the incidence of TF after CTR and a published full text were included. The primary outcomes were the association between CTR and the subsequent occurrence of TF and the pooled incidence of post-CTR TF. The secondary outcomes included potential risk factors among patients with and without post-CTR TF, as well as the prevalence of post-CTR TF on the affected digits.
RESULTS
Ten studies were included in the meta-analysis: nine reporting a total of 10,399 participants and one reporting 875 operated hands. CTR significantly increased the risk of subsequent TF occurrence (odds ratio = 2.67; 95% CI 2.344-3.043; P < 0.001). The pooled incidence of TF development after CTR was 7.7%. Women were more likely to develop TF after CTR surgery (odds ratio = 2.02; 95% CI 1.054-3.873; P = 0.034). Finally, the thumb was the most susceptible digit, followed by the middle and ring fingers.
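The reported confidence interval can be sanity-checked against the point estimate, since a 95% CI for an odds ratio is symmetric on the log scale. A minimal sketch, assuming the standard Wald construction; the standard error below is back-derived from the reported bounds, not a value taken from the paper:

```python
import math

# Reported pooled result: OR = 2.67, 95% CI 2.344-3.043.
or_point = 2.67
ci_low, ci_high = 2.344, 3.043

# A Wald 95% CI for an OR is exp(log(OR) +/- 1.96 * SE), so the SE of
# log(OR) can be back-derived from the width of the reported interval.
se_log_or = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Reconstructing the interval from the point estimate and that SE
# should closely reproduce the reported bounds.
recon_low = math.exp(math.log(or_point) - 1.96 * se_log_or)
recon_high = math.exp(math.log(or_point) + 1.96 * se_log_or)
```

Here the reconstruction agrees with the published bounds to within about 0.001, which is consistent with the interval having been computed on the log scale.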
CONCLUSIONS
The incidence of TF after CTR is high, and women are more susceptible than men. Clinicians should be aware of the potential risk of TF after CTR in clinical practice.
LEVEL OF EVIDENCE
Level III, meta-analysis.
Topics: Male; Humans; Female; Incidence; Carpal Tunnel Syndrome; Risk Factors; Trigger Finger Disorder; Thumb
PubMed: 37161585
DOI: 10.1097/JS9.0000000000000450
Journal of Digital Imaging Jun 2023
Review
Use of Deep Neural Networks in the Detection and Automated Classification of Lesions Using Clinical Images in Ophthalmology, Dermatology, and Oral Medicine-A Systematic Review.
Artificial neural networks (ANN) are artificial intelligence (AI) techniques used in the automated recognition and classification of pathological changes from clinical images in areas such as ophthalmology, dermatology, and oral medicine. The combination of enterprise imaging and AI is gaining attention for its potential benefits in healthcare areas such as cardiology, dermatology, ophthalmology, pathology, physiatry, radiation oncology, radiology, and endoscopy. The present study aimed to analyze, through a systematic literature review, the performance of ANN and deep learning in the recognition and automated classification of lesions from clinical images, compared with human performance. The PRISMA 2020 approach (Preferred Reporting Items for Systematic Reviews and Meta-analyses) was used, searching four databases for studies that applied AI to define the diagnosis of lesions in the ophthalmology, dermatology, and oral medicine areas. Quantitative and qualitative analyses of the articles that met the inclusion criteria were performed. The search yielded the inclusion of 60 studies. Interest in the topic has increased, especially in the last 3 years. The performance of AI models is promising, with high accuracy, sensitivity, and specificity, and most had outcomes equivalent to human comparators. The reproducibility of model performance in real-life practice has been reported as a critical point. Study designs and results have progressively improved. AI resources have the potential to contribute to several areas of health. In the coming years, AI is likely to be incorporated into everyday practice, improving precision and reducing the time required by the diagnostic process.
Topics: Humans; Artificial Intelligence; Reproducibility of Results; Ophthalmology; Dermatology; Neural Networks, Computer
PubMed: 36650299
DOI: 10.1007/s10278-023-00775-3
NPJ Digital Medicine Apr 2023
Review
Pain is a complex and personal experience that presents diverse measurement challenges. Different sensing technologies can be used as a surrogate measure of pain to overcome these challenges. The objective of this review is to summarise and synthesise the published literature to: (a) identify relevant non-invasive physiological sensing technologies that can be used for the assessment of human pain, (b) describe the analytical tools used in artificial intelligence (AI) to decode pain data collected from sensing technologies, and (c) describe the main implications in the application of these technologies. A literature search was conducted in July 2022 to query PubMed, Web of Science, and Scopus. Papers published between January 2013 and July 2022 were considered. Forty-eight studies were included in this literature review. Two main sensing technologies (neurological and physiological) were identified in the literature. The sensing technologies and their modality (unimodal or multimodal) are presented. The literature provided numerous examples of how different analytical tools in AI have been applied to decode pain. This review identifies different non-invasive sensing technologies, their analytical tools, and the implications for their use. There are significant opportunities to leverage multimodal sensing and deep learning to improve the accuracy of pain monitoring systems. This review also identifies the need for analyses and datasets that explore the inclusion of neural and physiological information together. Finally, challenges and opportunities for designing better systems for pain assessment are also presented.
PubMed: 37100924
DOI: 10.1038/s41746-023-00810-1
Cells Mar 2022
Meta-Analysis Review
In 2020, 55 million people worldwide were living with dementia, and this number is projected to reach 139 million in 2050. However, approximately 75% of people living with dementia have not received a formal diagnosis; hence, they do not have access to treatment and care. Without effective treatment in the foreseeable future, it is essential to focus on modifiable risk factors and early intervention. Central auditory processing is impaired in people diagnosed with Alzheimer's disease (AD) and its preclinical stages and may manifest many years before clinical diagnosis. This study systematically reviewed central auditory processing function in AD and its preclinical stages using behavioural central auditory processing tests. Eleven studies met the full inclusion criteria, and seven were included in the meta-analyses. The results revealed that those with mild cognitive impairment performed significantly worse than healthy controls on within-channel adaptive tests of temporal response (ATTR), the time-compressed speech test (TCS), the Dichotic Digits Test (DDT), Dichotic Sentence Identification (DSI), Speech in Noise (SPIN), and Synthetic Sentence Identification-Ipsilateral Competing Message (SSI-ICM) central auditory processing tests. In addition, this analysis indicates that participants with AD performed significantly worse than healthy controls in the DDT, DSI, and SSI-ICM tasks. Clinical implications are discussed in detail.
Topics: Humans; Alzheimer Disease; Cognitive Dysfunction; Hearing
PubMed: 35326458
DOI: 10.3390/cells11061007
Digital Health 2023
Review
BACKGROUND
Musculoskeletal conditions are the leading cause of disability worldwide. Telerehabilitation may be a viable option in the management of these conditions, facilitating access and patient adherence. Nevertheless, the impact of biofeedback-assisted asynchronous telerehabilitation remains unknown.
OBJECTIVE
To systematically review and assess the effectiveness of exercise-based asynchronous biofeedback-assisted telerehabilitation on pain and function in individuals with musculoskeletal conditions.
METHODS
This systematic review followed Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines. The search was conducted using three databases: PubMed, Scopus, and PEDro. Study criteria included articles written in English and published from January 2017 to August 2022, reporting interventional trials evaluating exercise-based asynchronous telerehabilitation using biofeedback in adults with musculoskeletal disorders. The risks of bias and certainty of evidence were appraised using the Cochrane tool and Grading of Recommendations, Assessment, Development, and Evaluation (GRADE), respectively. The results are narratively summarized, and the effect sizes of the main outcomes were calculated.
RESULTS
Fourteen trials were included: 10 using motion tracker technology (n = 1284) and four with camera-based biofeedback (n = 467). Telerehabilitation with motion trackers yields at least similar improvements in pain and function in people with musculoskeletal conditions (effect sizes: 0.19-1.45; low certainty of evidence). Uncertain evidence exists for the effectiveness of camera-based telerehabilitation (effect sizes: 0.11-0.13; very low certainty of evidence). No study found superior results in a control group.
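Effect sizes of the kind reported above are typically standardized mean differences (Cohen's d). A minimal sketch of how such an effect size is computed from group summary statistics; the numbers in the usage example are hypothetical, not values from any included trial:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups (Cohen's d)."""
    # Pooled standard deviation across both groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical example: pain reduction of 5.0 vs 4.0 points
# (both SD 2.0, n = 30 per group)
d = cohens_d(5.0, 2.0, 30, 4.0, 2.0, 30)  # -> 0.5, conventionally a "medium" effect
```

On this scale, the reported range of 0.19-1.45 spans roughly small to very large effects, which is why certainty-of-evidence grading matters when interpreting them.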
CONCLUSIONS
Asynchronous telerehabilitation may be an option in the management of musculoskeletal conditions. Considering its potential for scalability and access democratization, additional high-quality research is needed to address long-term outcomes, comparativeness, and cost-effectiveness and identify treatment responders.
PubMed: 37325077
DOI: 10.1177/20552076231176696
NPJ Digital Medicine Mar 2022
Review
Accurate and objective performance assessment is essential for both trainees and certified surgeons. However, existing methods can be time consuming, labor intensive, and subject to bias. Machine learning (ML) has the potential to provide rapid, automated, and reproducible feedback without the need for expert reviewers. We aimed to systematically review the literature and determine the ML techniques used for technical surgical skill assessment and identify challenges and barriers in the field. A systematic literature search, in accordance with the PRISMA statement, was performed to identify studies detailing the use of ML for technical skill assessment in surgery. Of the 1896 studies that were retrieved, 66 studies were included. The most common ML methods used were Hidden Markov Models (HMM, 14/66), Support Vector Machines (SVM, 17/66), and Artificial Neural Networks (ANN, 17/66). 40/66 studies used kinematic data, 19/66 used video or image data, and 7/66 used both. Studies assessed the performance of benchtop tasks (48/66), simulator tasks (10/66), and real-life surgery (8/66). Accuracy rates of over 80% were achieved, although tasks and participants varied between studies. Barriers to progress in the field included a focus on basic tasks, lack of standardization between studies, and lack of datasets. ML has the potential to produce accurate and objective surgical skill assessment through the use of methods including HMM, SVM, and ANN. Future ML-based assessment tools should move beyond the assessment of basic tasks and towards real-life surgery and provide interpretable feedback with clinical value for the surgeon. PROSPERO: CRD42020226071.
PubMed: 35241760
DOI: 10.1038/s41746-022-00566-0
NPJ Digital Medicine Apr 2023
Review
Advancements in deep learning and computer vision provide promising solutions for medical image analysis, potentially improving healthcare and patient outcomes. However, the prevailing paradigm of training deep learning models requires large quantities of labeled training data, which is both time-consuming and cost-prohibitive to curate for medical images. Self-supervised learning has the potential to make significant contributions to the development of robust medical imaging models through its ability to learn useful insights from copious medical datasets without labels. In this review, we provide consistent descriptions of different self-supervised learning strategies and compose a systematic review of papers published between 2012 and 2022 on PubMed, Scopus, and ArXiv that applied self-supervised learning to medical imaging classification. We screened a total of 412 relevant studies and included 79 papers for data extraction and analysis. With this comprehensive effort, we synthesize the collective knowledge of prior work and provide implementation guidelines for future researchers interested in applying self-supervised learning to their development of medical imaging classification models.
PubMed: 37100953
DOI: 10.1038/s41746-023-00811-0
Journal of the American College of... Aug 2022
STUDY OBJECTIVE
Digital nerve blocks (DNBs) provide local anesthesia for minor procedures of the digits. Several DNB techniques have been described, but it is unclear which technique provides adequate anesthesia with the least pain. DNB techniques can be grouped into a dorsal approach, which requires 2 injections, versus 3 different types of volar approaches, which require a single injection. We performed a meta-analysis to compare DNB techniques with respect to time to anesthesia (TTA), duration of anesthesia (DOA), and pain of injection. We also reviewed data on degree and distribution of anesthesia and discuss the techniques preferred by study participants and clinicians performing injections.
DATA SOURCES
We searched MEDLINE, EMBASE, and CENTRAL databases with terms "digital block," "digital nerve block," "local anesthetic," "local anesthesia," "lidocaine," and/or "bupivacaine."
STUDY SELECTION
Randomized controlled trials (RCTs) were prioritized, though high-quality prospective cohort studies were also eligible. All included studies evaluated DNB techniques or anesthetics. Twenty-three papers (21 RCTs and 2 prospective descriptive studies) were included.
DATA EXTRACTION
DNBs studied included dorsal ring block, traditional dorsal block, transthecal block, modified transthecal block, and volar subcutaneous digital blocks. Outcomes measured included TTA, DOA, pain of injection scores, and degree of anesthesia.
RESULTS
Overall, mean TTA was 4.5 minutes (95% confidence interval [CI] 3.5, 5.6), mean DOA was 187 minutes (95% CI 104.3, 269.7), and mean pain score was 2.1 out of 10 (95% CI 1.3, 2.8) without significant differences between studies or techniques.
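Pooled means like those above are commonly obtained by inverse-variance weighting of the study-level estimates. A minimal fixed-effect sketch of that calculation; the study values in the usage example are hypothetical, not data from the included trials:

```python
def pooled_mean(means, standard_errors):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    # Each study is weighted by the inverse of its sampling variance,
    # so more precise studies contribute more to the pooled estimate.
    weights = [1.0 / se**2 for se in standard_errors]
    total_weight = sum(weights)
    estimate = sum(w * m for w, m in zip(weights, means)) / total_weight
    pooled_se = (1.0 / total_weight) ** 0.5
    return estimate, pooled_se

# Hypothetical time-to-anesthesia means (minutes) from three studies
est, se = pooled_mean([4.0, 4.5, 5.0], [0.5, 0.5, 1.0])
# A 95% CI is then est +/- 1.96 * se, the construction behind
# intervals such as "4.5 minutes (95% CI 3.5, 5.6)".
```

When between-study heterogeneity is substantial, meta-analyses typically switch to a random-effects model, which widens the interval; the wide DOA interval reported above (104.3-269.7 minutes) is suggestive of such heterogeneity.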
CONCLUSIONS
There were no significant differences in the outcomes of TTA, DOA, and pain of injection between different DNB techniques. Single-injection volar approaches may be preferred by participants and clinicians over dorsal approaches that require 2 injections, particularly with respect to pain. However, 2-injection dorsal approaches may have better coverage of the proximal dorsal surface based on degree and distribution of anesthesia.
PubMed: 35795710
DOI: 10.1002/emp2.12753
Digital Health 2023
Review
BACKGROUND
There is growing evidence to suggest that electronic health records (EHRs) may be associated with clinician stress and burnout, which could hamper their effective use and introduce risks to patient safety.
OBJECTIVE
This systematic review aimed to examine the association between EHR use and clinicians' stress and burnout in hospital settings, and to identify the contributing factors influencing this relationship.
METHODS
The search included peer-reviewed studies published between 2000 and 2023 in English in CINAHL, Ovid Medline, Embase, and PsycINFO. Studies that provided specific data regarding clinicians' stress and/or burnout related to EHRs in hospitals were included. A quality assessment of the included studies was conducted.
RESULTS
Twenty-nine studies were included (25 cross-sectional surveys, one qualitative study, and three mixed-methods studies), focusing on physicians (n = 18), nurses (n = 10), and mixed professions (n = 3). Usability issues and the amount of time spent on the EHR were the most significant predictors, while the intensity of the working environment increased EHR-related workload and thereby also contributed to stress and burnout. Differences in clinicians' specialties influenced the levels of stress and burnout related to EHRs.
CONCLUSIONS
This systematic review showed that EHR use was a perceived contributor to clinicians' stress and burnout in hospitals, primarily driven by poor usability and excessive time spent on EHRs. Addressing these issues requires tailored EHR systems, rigorous usability testing, support for the needs of different specialties, qualitative research on EHR stressors, and expanded research in non-Western contexts.
PubMed: 38130797
DOI: 10.1177/20552076231220241