Computational and Mathematical Methods... 2022
Comparative Study
Breast cancer incidence has been rising steadily over the past few decades, and it is the second leading cause of death in women. If it is diagnosed early, there is a good possibility of recovery. Mammography is proven to be an excellent screening technique for breast tumor diagnosis, but detecting and classifying tumors in mammograms remains a significant challenge. The major limitation of previous studies is an increase in false positive ratio (FPR) and false negative ratio (FNR), as well as a drop in Matthews correlation coefficient (MCC) value. A model that can lower FPR and FNR while increasing MCC value is required. To overcome these limitations, a modified YOLOv5 network is used in this study to detect and classify breast tumors. Our research is conducted on the publicly available Curated Breast Imaging Subset of DDSM (CBIS-DDSM) dataset. The first step is preprocessing, which includes image enhancement techniques and the removal of pectoral muscles and labels. The dataset is then annotated, augmented, and divided into 60% for training, 30% for validation, and 10% for testing. The experiment is then performed using a batch size of 8, a learning rate of 0.01, a momentum of 0.843, and an epoch value of 300. To evaluate its performance, the proposed model is compared with YOLOv3 and Faster R-CNN. The results show that our proposed model performs better than YOLOv3 and Faster R-CNN with a 96% mAP, a 93.50% MCC value, 96.50% accuracy, a 0.04 FPR, and a 0.03 FNR. These results show that our model successfully detects and classifies breast tumors while overcoming previous research limitations by lowering the FPR and FNR and boosting the MCC value.
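The FPR, FNR, and MCC reported above are all derived from a binary confusion matrix. A minimal sketch of how these quantities are computed; the counts below are illustrative, not taken from the paper:

```python
import math

def confusion_metrics(tp, fp, tn, fn):
    """MCC, FPR, and FNR from binary confusion-matrix counts."""
    fpr = fp / (fp + tn)  # false positive ratio: false alarms among actual negatives
    fnr = fn / (fn + tp)  # false negative ratio: misses among actual positives
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return mcc, fpr, fnr

# Illustrative counts chosen to land near the reported operating point
mcc, fpr, fnr = confusion_metrics(tp=96, fp=4, tn=97, fn=3)
```

With these counts, FPR ≈ 0.04, FNR ≈ 0.03, and MCC ≈ 0.93, which is the kind of trade-off the abstract reports.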
Topics: Breast; Breast Neoplasms; Computational Biology; Databases, Factual; Diagnosis, Computer-Assisted; False Negative Reactions; False Positive Reactions; Female; Humans; Machine Learning; Mammography; Neural Networks, Computer; Radiographic Image Enhancement; Sensitivity and Specificity
PubMed: 35027940
DOI: 10.1155/2022/1359019
Proceedings of the National Academy of... Jan 2020
Engaging in altruistic behaviors is costly, but it contributes to the health and well-being of the performer of such behaviors. The present research offers an account of how this paradox can be understood. Across 2 pilot studies and 3 experiments, we showed a pain-relieving effect of performing altruistic behaviors. Acting altruistically relieved not only acutely induced physical pain among healthy adults but also chronic pain among cancer patients. Using functional MRI, we found that, after individuals performed altruistic actions, brain activity in the dorsal anterior cingulate cortex and bilateral insula in response to a painful shock was significantly reduced. This reduced pain-induced activation in the right insula was mediated by neural activity in the ventral medial prefrontal cortex (VMPFC), while the activation of the VMPFC was positively correlated with the meaningfulness the performer experienced from his or her altruistic behavior. Our findings suggest that incurring personal costs to help others may buffer performers from unpleasant conditions.
Topics: Adult; Aged; Altruism; Brain; Brain Mapping; Cerebral Cortex; Female; Gyrus Cinguli; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Nervous System Physiological Phenomena; Pain; Pilot Projects; Prefrontal Cortex; Young Adult
PubMed: 31888986
DOI: 10.1073/pnas.1911861117
Pediatric Research Oct 2022
BACKGROUND
Lung ultrasound (LUS) for critical patients requires trained operators to perform it, though little information exists on the level of training required for independent practice. The aims were to implement a training plan for diagnosing pneumonia using LUS and to analyze the inter-observer agreement between senior radiologists (SRs) and pediatric intensive care physicians (PICPs).
METHODS
Prospective longitudinal and interventional study conducted in the Pediatric Intensive Care Unit of a tertiary hospital. Following a theoretical and practical training plan regarding diagnosing pneumonia using LUS, the concordance between SRs and the PICPs on their LUS reports was analyzed.
RESULTS
Nine PICPs were trained and tested on both theoretical and practical LUS knowledge. The mean exam mark was 13.5/15. To evaluate inter-observer agreement, a total of 483 LUS were performed. For interstitial syndrome, the global Kappa coefficient (K) was 0.51 (95% CI 0.43-0.58). Regarding the presence of consolidation, K was 0.67 (95% CI 0.53-0.78), and for the consolidation pattern, K was 0.82 (95% CI 0.79-0.85), showing almost perfect agreement.
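The Kappa coefficients above quantify agreement between two observers beyond what chance alone would produce. A minimal sketch of Cohen's kappa for two raters over the same cases; the labels are illustrative, not study data:

```python
def cohens_kappa(rater_a, rater_b, labels):
    """Cohen's kappa: (observed - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)  # chance
    return (po - pe) / (1 - pe)

# 1 = consolidation present, 0 = absent; the raters disagree on one of eight scans
k = cohens_kappa([1, 1, 0, 0, 1, 0, 1, 1], [1, 0, 0, 0, 1, 0, 1, 1], labels=[0, 1])
```

By the usual Landis-Koch bands, values above 0.8 (like the 0.82 reported for consolidation pattern) count as almost perfect agreement.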
CONCLUSIONS
Our training plan allowed PICPs to independently perform LUS and might improve pneumonia diagnosis. We found a high inter-observer agreement between PICPs and SRs in detecting the presence and type of consolidation on LUS.
IMPACT
Lung ultrasound (LUS) has been proposed as an alternative to diagnose pneumonia in children. However, the adoption of LUS in clinical practice has been slow, and it is not yet included in general clinical guidelines. The results of this study show that the implementation of a LUS training program may improve pneumonia diagnosis in critically ill patients. The training program's design, implementation, and evaluation are described. The high inter-observer agreement between LUS reports from the physicians trained and expert radiologists encourage the use of LUS not only for pneumonia diagnosis, but also for discerning bacterial and viral patterns.
Topics: Child; Humans; Prospective Studies; Pneumonia; Lung; Ultrasonography; Lung Diseases
PubMed: 34969992
DOI: 10.1038/s41390-021-01928-2
BJS Open Jul 2021
BACKGROUND
Eye-tracking offers a new set of performance measures for surgeons. Previous studies of eye-tracking have reported that action-related fixation is a good measure for identifying elite task performers. Other measures, including early eye engagement with the target and early eye disengagement from the previous subtask, were also reported to distinguish between different expertise levels. These parameters were examined during laparoscopic surgery simulations in the present study, with the goal of identifying the most useful measures for distinguishing surgical expertise.
METHODS
Surgical operators, including experienced surgeons (expert), residents (intermediate), and university students (novice), were required to perform a laparoscopic task involving reaching, grasping, and loading, while their eye movements and performance videos were recorded. Spatiotemporal features of eye-hand coordination and action-related fixation were calculated and compared among the groups.
RESULTS
The study included five experienced surgeons, seven residents, and 14 novices. Overall, experts performed tasks faster than novices. Examining eye-hand coordination on each subtask, it was found that experts managed to disengage their eyes earlier from the previous subtask, whereas novices disengaged their eyes from the previous subtask with a significant delay. Early eye engagement with the current subtask was observed for all operators. There was no difference in action-related fixation between experienced surgeons and novices. Disengage time was strongly associated with the operators' surgical experience score, and discriminated expertise better than both early-engage time and action-related fixation.
CONCLUSION
The spatiotemporal features of surgeons' eye-hand coordination can be used to assess level of surgical experience.
Topics: Clinical Competence; Eye Movements; Humans; Laparoscopy; Surgeons
PubMed: 34476467
DOI: 10.1093/bjsopen/zrab068
BMC Medical Informatics and Decision... Aug 2023
BACKGROUND
Differentiating between Crohn's disease (CD) and intestinal tuberculosis (ITB) with endoscopy is challenging. We aim to perform more accurate endoscopic diagnosis between CD and ITB by building a trustworthy AI differential diagnosis application.
METHODS
A total of 1271 electronic health record (EHR) patients who had undergone colonoscopies at Peking Union Medical College Hospital (PUMCH) and were clinically diagnosed with CD (n = 875) or ITB (n = 396) were used in this study. We built a workflow to make diagnoses from EHRs and mine differential-diagnosis features; this involves fine-tuning pretrained language models, distilling them into a light and efficient TextCNN model, interpreting the neural network and selecting differential attribution features, and then adopting manual feature checking and carrying out debiasing training.
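The distillation step described above, compressing fine-tuned language models into a light TextCNN, is typically done by training the student against the teacher's temperature-softened output distribution. A minimal sketch of such a distillation loss; this is the standard recipe, not necessarily the authors' exact formulation:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature gives softer distributions."""
    m = max(logits)
    exps = [math.exp((x - m) / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the softened teacher distribution."""
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher, student))
```

The loss is smallest when the student reproduces the teacher's softened distribution; in practice it is usually combined with the ordinary hard-label loss.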
RESULTS
The accuracy of the debiased TextCNN on differential diagnosis between CD and ITB is 0.83 (CD F1: 0.87, ITB F1: 0.77), which is the best among the baselines. On the noisy validation set, its accuracy was 0.70 (CD F1: 0.87, ITB F1: 0.69), which was significantly higher than that of models without debiasing. We also find that the debiased model more readily mines diagnostically significant features. The debiased TextCNN unearthed 39 diagnostic features in the form of phrases, 17 of which were key diagnostic features recognized by the guidelines.
CONCLUSION
We build a trustworthy AI differential diagnosis application for differentiating between CD and ITB focusing on accuracy, interpretability and robustness. The classifiers perform well, and the features which had statistical significance were in agreement with clinical guidelines.
Topics: Humans; Crohn Disease; Diagnosis, Differential; Tuberculosis, Gastrointestinal; Colonoscopy
PubMed: 37582768
DOI: 10.1186/s12911-023-02257-6
Computer Methods and Programs in... Oct 2022
OBJECTIVE
Nowadays, COVID-19 is spreading rapidly worldwide and seriously threatening lives. From the perspective of both security and the economy, the effective control of COVID-19 has a profound impact on society as a whole. An effective strategy is earlier diagnosis to prevent the spread of the disease, together with prompt treatment of severe cases to improve the chance of survival.
METHODS
The method of this paper is as follows. First, the collected dataset is processed by chest film image processing, and bone removal is carried out in the rib-subtraction module. Then, the preprocessing stage performs histogram equalization, sharpening, and other operations on the chest film. Finally, the backbone network extracts shallow and high-level feature maps from the processed chest radiographs. We implement a self-attention mechanism in Inception-ResNet, perform standard classification, and identify chest radiograph diseases through the classifier to realize a computer-aided COVID-19 diagnosis process at the medical level, with the aim of further enhancing the classification performance of the convolutional neural network. Numerous computer simulations demonstrate that the Inception-ResNet convolutional neural network performs CT image categorization and enhancement with greater efficiency and flexibility than conventional segmentation techniques.
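Histogram equalization, one of the preprocessing operations mentioned above, stretches the gray-level distribution so that low-contrast radiographs use the full intensity range. A minimal sketch for a flat list of 8-bit pixel values, not the paper's implementation:

```python
def equalize_histogram(pixels, levels=256):
    """Map gray values through the normalized cumulative histogram."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first nonzero CDF value
    if n == cdf_min:  # constant image: nothing to equalize
        return list(pixels)
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1)) for p in pixels]
```

A narrow band of mid-gray values gets remapped across the full 0-255 range, which is why equalization is a common first step before feature extraction.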
RESULTS
The experimental COVID-19 CT dataset used in this paper consists of new CT scans and medical imaging of normal subjects, early COVID-19 patients, and severe COVID-19 patients from Jinyintan Hospital. The experiment plots the relationships between model accuracy, model loss, and epoch, using ACC, TPR, SPE, F1 score, and G-mean to evaluate the image maps of patients with and without the disease. The statistical measurement values obtained by Inception-ResNet are 88.23%, 83.45%, 89.72%, 95.53%, and 88.74%, respectively. The experimental results show that Inception-ResNet outperforms other image classification methods on these evaluation indicators, and that the method has higher robustness, accuracy, and intuitiveness.
CONCLUSION
As CT imaging becomes widely used in the clinical diagnosis of COVID-19 and the number of available samples continues to increase, the method in this paper is expected to become an additional diagnostic tool that can effectively improve the diagnostic accuracy of clinical COVID-19 imaging.
Topics: COVID-19; COVID-19 Testing; Humans; Image Processing, Computer-Assisted; Lung; Neural Networks, Computer
PubMed: 35964421
DOI: 10.1016/j.cmpb.2022.107053
Journal of Translational Medicine Jan 2021
BACKGROUND
Hysteroscopy is a commonly used technique for diagnosing endometrial lesions. It is essential to develop an objective model to aid clinicians in lesion diagnosis, as each type of lesion has a distinct treatment, and judgments of hysteroscopists are relatively subjective. This study constructs a convolutional neural network model that can automatically classify endometrial lesions using hysteroscopic images as input.
METHODS
All histopathologically confirmed endometrial lesion images were obtained from the Shengjing Hospital of China Medical University, including endometrial hyperplasia without atypia, atypical hyperplasia, endometrial cancer, endometrial polyps, and submucous myomas. The study included 1851 images from 454 patients. After the images were preprocessed (histogram equalization, addition of noise, rotations, and flips), a training set of 6478 images was input into a tuned VGGNet-16 model; 250 images were used as the test set to evaluate the model's performance. Thereafter, we compared the model's results with the diagnoses of gynecologists.
RESULTS
The overall accuracy of the VGGNet-16 model in classifying endometrial lesions is 80.8%. Its sensitivity to endometrial hyperplasia without atypia, atypical hyperplasia, endometrial cancer, endometrial polyp, and submucous myoma is 84.0%, 68.0%, 78.0%, 94.0%, and 80.0%, respectively; for these diagnoses, the model's specificity is 92.5%, 95.5%, 96.5%, 95.0%, and 96.5%, respectively. When classifying lesions as benign or as premalignant/malignant, the VGGNet-16 model's accuracy, sensitivity, and specificity are 90.8%, 83.0%, and 96.0%, respectively. The diagnostic performance of the VGGNet-16 model is slightly better than that of the three gynecologists in both classification tasks. With the aid of the model, the overall accuracy of the diagnosis of endometrial lesions by gynecologists can be improved.
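Per-class sensitivity and specificity figures like those above come from treating each lesion type one-vs-rest against the multiclass predictions. A minimal sketch; the labels below are illustrative, not study data:

```python
def one_vs_rest_metrics(y_true, y_pred, cls):
    """Sensitivity and specificity for one class versus all others."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == cls and p == cls for t, p in pairs)  # correctly flagged as cls
    fn = sum(t == cls and p != cls for t, p in pairs)  # missed cases of cls
    tn = sum(t != cls and p != cls for t, p in pairs)  # correctly not flagged
    fp = sum(t != cls and p == cls for t, p in pairs)  # false alarms for cls
    return tp / (tp + fn), tn / (tn + fp)

truth = ["polyp", "polyp", "cancer", "cancer"]
preds = ["polyp", "cancer", "cancer", "cancer"]
sens, spec = one_vs_rest_metrics(truth, preds, "polyp")
```

Repeating this for each of the five lesion classes yields a sensitivity/specificity pair per class, as reported in the abstract.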
CONCLUSIONS
The VGGNet-16 model performs well in classifying endometrial lesions from hysteroscopic images and can provide objective diagnostic evidence for hysteroscopists.
Topics: China; Deep Learning; Endometrial Hyperplasia; Endometrial Neoplasms; Female; Humans; Hysteroscopy; Pregnancy; Sensitivity and Specificity; Uterine Diseases
PubMed: 33407588
DOI: 10.1186/s12967-020-02660-x
Academic Emergency Medicine : Official... Sep 2017
Meta-Analysis Review
BACKGROUND
The use of ultrasonography (US) to diagnose appendicitis is well established. More recently, point-of-care ultrasonography (POCUS) has also been studied for the diagnosis of appendicitis and may prove a valuable diagnostic tool. The purpose of this study was to identify, through systematic review and meta-analysis, the test characteristics of POCUS, specifically US performed by a nonradiologist physician, in accurately diagnosing acute appendicitis in patients of any age.
METHODS
We conducted a thorough and systematic literature search of English-language articles published from 1980 to May 2015 on point-of-care, physician-performed transabdominal US used for the diagnosis of acute appendicitis, using OVID MEDLINE In-Process & Other Non-Indexed Citations and Scopus. Studies were selected and subsequently independently abstracted by two trained reviewers. A random-effects pooled analysis was used to construct a hierarchical summary receiver operating characteristic curve, and a meta-regression was performed. Quality of studies was assessed using the QUADAS-2 tool.
RESULTS
Our search yielded 5,792 unique studies, of which 21 were included in our final review. Prevalence of disease in this study was 29.8% (range = 6.4%-75.4%). The sensitivity and specificity for POCUS in diagnosing appendicitis were 91% (95% confidence interval [CI] = 83%-96%) and 97% (95% CI = 91%-99%), respectively. The positive and negative predictive values were 91% and 94%, respectively. Studies performed by emergency physicians had slightly lower test characteristics (sensitivity = 80%, specificity = 92%). There was significant heterogeneity between studies (I² = 99%, 95% CI = 99%-100%), and the quality of the reported studies was moderate, mostly due to unclear reporting of the blinding of physicians and the timing of scanning and patient enrollment. Several of the studies were performed by a single operator, and the education and training of the operators were variably reported.
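Predictive values follow from sensitivity, specificity, and prevalence via Bayes' rule. A naive point calculation using the pooled estimates above as inputs; it approximates, but will not exactly reproduce, the meta-analytically pooled values the review reports:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from test characteristics and disease prevalence (Bayes' rule)."""
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    )
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence
    )
    return ppv, npv

# Pooled sensitivity 91%, specificity 97%, prevalence 29.8% from the review
ppv, npv = predictive_values(0.91, 0.97, 0.298)  # roughly 0.93 and 0.96
```

Because predictive values depend on prevalence, the same test looks weaker in a low-prevalence population, which is one reason the review cautions against using POCUS as a stand-alone rule-out test.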
CONCLUSION
Point-of-care US has relatively high sensitivity and specificity for diagnosing acute appendicitis, although the data presented are limited by the quality of the original studies and large CIs. In the hands of an experienced operator, POCUS is an appropriate initial imaging modality for diagnosing appendicitis. Based on our results, it is premature to utilize POCUS as a stand-alone test or to rule out appendicitis.
Topics: Acute Disease; Appendicitis; Humans; Point-of-Care Systems; ROC Curve; Sensitivity and Specificity; Ultrasonography
PubMed: 28464459
DOI: 10.1111/acem.13212
International Journal of Clinical... Nov 2022
Observational Study
BACKGROUND
Sinusoidal obstruction syndrome (SOS) refers to liver injury caused by hematopoietic stem cell transplantation (HSCT) and anticancer drugs including oxaliplatin. Increased splenic volume (SV) on computed tomography (CT) indicates oxaliplatin-induced SOS. Similarly, ultrasonography and liver stiffness measurement (LSM) by shear-wave elastography (SWE) can help diagnose SOS after HSCT; however, their usefulness for diagnosing oxaliplatin-induced SOS remains unclear. We investigated the usefulness of the Hokkaido ultrasonography-based scoring system with 10 ultrasonographic parameters (HokUS-10) and SWE in diagnosing oxaliplatin-induced SOS early.
METHODS
In this prospective observational study, ultrasonography and SWE were performed before and at 2, 4, and 6 months after oxaliplatin-based chemotherapy. HokUS-10 was used for assessment. CT volumetry of the SV was performed in clinical practice, and an SV increase ≥ 30% was considered the diagnostic indicator of oxaliplatin-induced SOS. We assessed whether HokUS-10 and SWE can lead to an early detection of oxaliplatin-induced SOS before an increased SV on CT.
RESULTS
Of the 30 enrolled patients with gastrointestinal cancers, 12 (40.0%) with an SV increase ≥ 30% on CT were diagnosed with SOS. The HokUS-10 score was not correlated with an SV increase ≥ 30% (r = 0.18). The rates of change of three HokUS-10 parameters were correlated with an SV increase ≥ 30% (r = 0.32-0.41), as was the rate of change of LSM by SWE (r = 0.40).
CONCLUSIONS
The usefulness of HokUS-10 score was not demonstrated; however, some HokUS-10 parameters and SWE could be useful for the early diagnosis of oxaliplatin-induced SOS.
Topics: Humans; Hepatic Veno-Occlusive Disease; Oxaliplatin; Elasticity Imaging Techniques; Ultrasonography; Antineoplastic Agents
PubMed: 36042137
DOI: 10.1007/s10147-022-02235-4
Frontiers in Oncology 2023
BACKGROUND
Pancreatic cystic neoplasms are increasingly diagnosed with the development of medical imaging technology and growing self-care awareness. However, two of their sub-types, serous cystic neoplasms (SCN) and mucinous cystic neoplasms (MCN), are often misclassified as each other. Because SCN is primarily benign while MCN has a high rate of malignant transformation, distinguishing SCN from MCN is both challenging and essential.
PURPOSE
MRI offers many different modalities that together carry the information needed to diagnose SCN and MCN. With the help of an artificial intelligence-based algorithm, we aimed to propose a multi-modal hybrid deep learning network that can efficiently diagnose SCN and MCN using multi-modality MRIs.
METHODS
A cross-modal feature fusion structure was innovatively designed, combining features of seven modalities to realize the classification of SCN and MCN. Sixty-nine patients with multi-modality MRIs were included, and experiments measured the performance of every modality.
RESULTS
The proposed method with the optimized settings outperformed all other techniques and human radiologists, with an accuracy of 75.07% and an AUC of 82.77%. In addition, the proposed disentanglement method outperformed other fusion methods, and delayed contrast-enhanced T1-weighted MRIs proved most valuable in diagnosing SCN and MCN.
CONCLUSIONS
Through the use of a contemporary artificial intelligence algorithm, physicians can attain high performance in the complex challenge of diagnosing SCN and MCN, surpassing human radiologists to a significant degree.
PubMed: 37795452
DOI: 10.3389/fonc.2023.1181270