Biotechnology Advances 2024
Review
The bioprocessing industry is undergoing a significant transformation in its approach to quality assurance, shifting from the traditional Quality by Testing (QbT) to Quality by Design (QbD). QbD, a systematic approach to quality in process development, integrates quality into process design and control, guided by regulatory frameworks. This paradigm shift enables increased operational efficiencies and reduced time to market, and ensures product consistency. The implementation of QbD is framed around key elements such as defining the Quality Target Product Profile (QTPP), identifying Critical Quality Attributes (CQAs), developing Design Spaces (DS), establishing Control Strategies (CS), and maintaining continual improvement. The present critical analysis delves into the intricacies of each element, emphasizing their role in ensuring consistent product quality and regulatory compliance. The integration of Industry 4.0 and 5.0 technologies, including Artificial Intelligence (AI), Machine Learning (ML), the Internet of Things (IoT), and Digital Twins (DTs), is significantly transforming the bioprocessing industry. These innovations enable real-time data analysis, predictive modelling, and process optimization, which are crucial elements in QbD implementation. Among these, the concept of DTs is notable for its ability to facilitate bi-directional data communication and enable real-time adjustments, thereby optimizing processes. DTs, however, face implementation challenges such as system integration, data security, and hardware-software compatibility. These challenges are being addressed through advancements in AI, Virtual Reality/Augmented Reality (VR/AR), and improved communication technologies. Central to the functioning of DTs is the development and application of models of differing types: mechanistic, empirical, and hybrid.
These models serve as the intellectual backbone of DTs, providing a framework for interpreting and predicting the behaviour of their physical counterparts. The choice and development of these models are vital for the accuracy and efficacy of DTs, enabling them to mirror and predict the real-time dynamics of bioprocessing systems. Complementing these models, advancements in data collection technologies, such as free-floating wireless sensors and spectroscopic sensors, enhance the monitoring and control capabilities of DTs, providing a more comprehensive and nuanced understanding of the bioprocessing environment. This review offers a critical analysis of the prevailing trends in model-based bioprocessing development within the sector.
Topics: Biotechnology; Machine Learning; Quality Control; Artificial Intelligence; Internet of Things
PubMed: 38754797
DOI: 10.1016/j.biotechadv.2024.108378
AEM Education and Training Jun 2024
BACKGROUND
With a rise in mass casualty incidents, training in hemorrhage control using tourniquets has been championed as a basic, lifesaving procedure for bystanders and medical professionals alike. The current standard for training is in-person (IP) courses, which can be limited based on instructor availability. Virtual reality (VR) has demonstrated the potential to improve the accuracy of certain medical tasks but has not yet been developed for hemorrhage control. The objective of this study was to evaluate the efficacy of a VR hemorrhage trainer in learner retention of tourniquet application when compared to traditional IP instructor teaching among a cohort of emergency medicine residents practicing in a Level I trauma center.
METHODS
This was a prospective, observational study of 53 emergency medicine residents at an inner-city program. Participants were randomly assigned to either the control or the VR group. On Day 0, all residents underwent a training session (IP vs. VR) for the proper, stepwise application of a tourniquet, as defined by the American College of Trauma Surgeons. Each participant was then assessed on the application of a tourniquet by a blinded instructor using the National Registry Hemorrhage Control Skills Lab rubric. After 3 months, each resident was reevaluated on the same rubric, with subsequent data analysis on successful tourniquet placement (measured as under 90 s) and time to completion.
RESULTS
Of the 53 participants, the IP training group had an initial pass rate of 97% (28/29) compared to 92% (22/24) in the VR group (P = 0.58). On retention testing, the IP training group had a pass rate of 95% (20/21) compared to 90% (18/20) in the VR group (P = 0.62). Stratifying the success of tourniquet placement by level of resident training did not demonstrate any statistically significant differences.
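The abstract does not name the test behind these P values, so reproducing them with Fisher's exact test (a common choice for small 2x2 tables like these) is an assumption. A minimal sketch using the reported pass/fail counts:

```python
from scipy.stats import fisher_exact

# 2x2 tables of (passed, failed) counts per arm, taken from the reported rates
initial = [[28, 1],   # in-person: 28/29 passed
           [22, 2]]   # VR: 22/24 passed
_, p_initial = fisher_exact(initial)

retention = [[20, 1],  # in-person: 20/21 passed
             [18, 2]]  # VR: 18/20 passed
_, p_retention = fisher_exact(retention)
```

Both P values land well above 0.05, consistent with the non-significant differences reported.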
CONCLUSIONS
In this pilot study, we found no significant difference in successful hemorrhage control by tourniquet placement between emergency medicine residents trained with VR and those trained with a traditional IP course. While more studies with greater power are needed, the results suggest that VR may be a useful adjunct to traditional IP medical training.
PubMed: 38738183
DOI: 10.1002/aet2.10986
Neural Networks : the Official Journal... Aug 2024
Garment transfer wears the garment from a model image onto a personal image. Because garment transfer leverages wild and cheap garment input, it has attracted tremendous attention in the community and has huge commercial potential. Since the ground truth of garment transfer is almost unavailable in reality, previous studies have treated garment transfer as either pose transfer or garment-pose disentanglement and trained it with self-supervised learning. However, these implementations do not fully cover garment transfer intentions and face robustness issues in the testing phase. Notably, virtual try-on technology has exhibited superior performance using self-supervised learning; we therefore propose to supervise garment transfer training via knowledge distillation from virtual try-on. Specifically, the overall pipeline first infers a garment transfer parsing and then uses it to guide downstream warping and inpainting tasks. The transfer parsing reasoning model learns response and feature knowledge from the try-on parsing reasoning model and absorbs hard knowledge from the ground truth. The progressive flow warping model learns content knowledge from virtual try-on for reasonable and precise garment warping. To enhance transfer realism, we propose an arm regrowth task to infer exposed skin. Experiments demonstrate that our method achieves state-of-the-art performance in transferring garments between persons compared with other virtual try-on and garment transfer methods.
Topics: Humans; Clothing; Neural Networks, Computer; Transfer, Psychology; Supervised Machine Learning; Knowledge
PubMed: 38733796
DOI: 10.1016/j.neunet.2024.106353
Journal of AAPOS : the Official... Jun 2024
Comparative Study
PURPOSE
To assess the feasibility and performance of Vivid Vision Perimetry (VVP), a new virtual reality (VR)-based visual field platform.
METHODS
Children 7-18 years of age with visual acuity of 20/80 or better undergoing Humphrey visual field (HVF) testing were recruited to perform VVP, a VR-based test that uses suprathreshold stimuli to test 54 field locations and calculates a fraction seen score. Pearson correlation coefficients were calculated to evaluate correlation between HVF mean sensitivity and VVP mean fraction seen scores. Participants were surveyed regarding their experience.
RESULTS
A total of 37 eyes of 23 participants (average age, 12.9 ± 3.1 years; 48% female) were included. All participants successfully completed VVP testing. Diagnoses included glaucoma (12), glaucoma suspect (7), steroid-induced ocular hypertension (3), and craniopharyngioma (1). Sixteen participants had prior HVF experience, and none had prior VVP experience, although 7 had previously used VR. Of the 23 HVF tests performed, 9 (39%) were unreliable due to fixation losses, false positives, or false negatives. Similarly, 35% of VVP tests were unreliable (as defined by accuracy of blind spot detection). Excluding unreliable HVF tests, the correlation between HVF average mean sensitivity and VVP mean fraction seen score was 0.48 (P = 0.02; 95% CI, 0.09-0.74). When asked about preference for the VVP or HVF examination, all participants favored the VVP, and 70% were "very satisfied" with VVP.
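The reported 95% CI for r = 0.48 can be approximated with the standard Fisher z-transform. The effective sample size behind the interval is not stated explicitly in the abstract, so n = 24 below is an illustrative assumption (it happens to reproduce an interval close to the reported 0.09-0.74):

```python
import numpy as np

def pearson_ci(r, n, z_crit=1.96):
    """Approximate 95% CI for a Pearson r via the Fisher z-transform."""
    z = np.arctanh(r)            # variance-stabilizing transform of r
    se = 1.0 / np.sqrt(n - 3)    # standard error of z
    return np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)

# r = 0.48 as reported; n = 24 is an assumed effective sample size
lo, hi = pearson_ci(0.48, 24)
```

With n = 24 this yields roughly (0.10, 0.74), matching the interval reported for the reliable tests.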
CONCLUSIONS
In our cohort of 23 pediatric subjects, VVP proved to be a clinically feasible VR-based visual field test that was uniformly preferred over HVF.
Topics: Humans; Visual Field Tests; Child; Female; Male; Pilot Projects; Adolescent; Visual Fields; Virtual Reality; Visual Acuity; Feasibility Studies; Glaucoma; Reproducibility of Results; Vision Disorders; Ocular Hypertension
PubMed: 38729256
DOI: 10.1016/j.jaapos.2024.103933
Journal of Medical Internet Research May 2024
Randomized Controlled Trial
BACKGROUND
Attention-deficit/hyperactivity disorder (ADHD) is one of the most common neurodevelopmental disorders among children. Pharmacotherapy has been the primary treatment for ADHD, supplemented by behavioral interventions. Digital and exercise interventions are promising nonpharmacologic approaches for enhancing the physical and psychological health of children with ADHD. However, the combined impact of digital and exercise therapies remains unclear.
OBJECTIVE
The aim of this study was to determine whether BrainFit, a novel digital intervention combining gamified cognitive and exercise training, is efficacious in reducing ADHD symptoms and executive function (EF) among school-aged children with ADHD.
METHODS
This 4-week prospective randomized controlled trial included 90 children (6-12 years old) who visited the ADHD outpatient clinic and met the diagnostic criteria for ADHD. The participants were randomized (1:1) to the BrainFit intervention (n=44) or a waitlist control (n=46) between March and August 2022. The intervention consisted of twelve 30-minute sessions delivered on an iPad over 4 weeks, with 3 sessions per week (Monday, Wednesday, and Friday after school), under the supervision of trained staff. The primary outcomes were parent-rated symptoms of attention and hyperactivity assessed according to the Swanson, Nolan, and Pelham questionnaire (SNAP-IV) rating scale and EF skills assessed by the Behavior Rating Inventory of Executive Function (BRIEF) scale, evaluated pre- and post-intervention. Intention-to-treat analysis was performed on 80 children after attrition. A nonparametric resampling-based permutation test was used for hypothesis testing of intervention effects.
RESULTS
Among the 145 children who met the inclusion criteria, 90 consented and were randomized; ultimately, 80 (88.9%) children completed the study and were included in the analysis. The participants' average age was 8.4 (SD 1.3) years, including 63 (78.8%) male participants. The most common ADHD subtype was hyperactive/impulsive (54/80, 68%) and 23 (29%) children had severe symptoms. At the endpoint of the study, the BrainFit intervention group had a significantly larger improvement in total ADHD symptoms (SNAP-IV total score) as compared to those in the control group (β=-12.203, 95% CI -17.882 to -6.523; P<.001), owing to lower scores on the subscales Inattention (β=-3.966, 95% CI -6.285 to -1.647; P<.001), Hyperactivity/Impulsivity (β=-5.735, 95% CI -8.334 to -3.137; P<.001), and Oppositional Defiant Disorder (β=-2.995, 95% CI -4.857 to -1.132; P=.002). The intervention was associated with significant reduction in the Metacognition Index (β=-6.312, 95% CI -10.973 to -1.650; P=.006) and Global Executive Composite (β=-5.952, 95% CI -10.214 to -1.690; P=.003) on the BRIEF. No severe intervention-related adverse events were reported.
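The nonparametric resampling-based permutation test named in the methods can be sketched as follows. The change scores, group sizes, effect size, and seed below are synthetic assumptions for illustration, not the trial's data:

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference in group means."""
    rng = np.random.default_rng(seed)
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly relabel group membership
        diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
        extreme += abs(diff) >= abs(observed)
    return (extreme + 1) / (n_perm + 1)  # add-one correction

# Synthetic SNAP-IV change scores (negative = improvement); illustrative only
rng = np.random.default_rng(1)
intervention = rng.normal(-12.0, 8.0, size=40)
control = rng.normal(0.0, 8.0, size=40)
p = permutation_test(intervention, control)
```

The appeal of the permutation approach here is that it makes no normality assumption about the rating-scale change scores.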
CONCLUSIONS
This novel digital cognitive-physical intervention was efficacious in school-age children with ADHD. A larger multicenter effectiveness trial with longer follow-up is warranted to confirm these findings and to assess the durability of treatment effects.
TRIAL REGISTRATION
Chinese Clinical Trial Register ChiCTR2300070521; https://www.chictr.org.cn/showproj.html?proj=177806.
Topics: Humans; Attention Deficit Disorder with Hyperactivity; Child; Male; Female; Executive Function; Prospective Studies; Cognitive Behavioral Therapy; Exercise Therapy; Treatment Outcome
PubMed: 38728075
DOI: 10.2196/55569
Frontiers in Pharmacology 2024
Slow wave sleep (SWS) is highly relevant for verbal and non-verbal/spatial memory in healthy individuals, but also in people with epilepsy. However, contradictory findings exist regarding the effect of seizures on overnight memory retention, particularly relating to procedural and non-verbal memory, and thorough examination of episodic memory retention with ecologically valid tests is missing. This research explores the interaction of SWS duration with epilepsy-relevant factors, as well as the relation of spectral characteristics of SWS to overnight retention of procedural, verbal, and episodic memory. In an epilepsy monitoring unit, epilepsy patients (N = 40) underwent learning, immediate, and 12 h delayed testing of memory retention for a finger-tapping task (procedural memory), a word-pair task (verbal memory), and an innovative virtual reality task (episodic memory). We used multiple linear regression to examine the impact of SWS duration, spectral characteristics of SWS, seizure occurrence, medication, depression, seizure type, gender, and epilepsy duration on overnight memory retention. Results indicated that none of the candidate variables significantly predicted overnight changes in procedural memory performance. For verbal memory, the occurrence of tonic-clonic seizures negatively impacted memory retention, and higher psychoactive medication load showed a tendency toward lower verbal memory retention. Episodic memory was significantly impacted by epilepsy duration, displaying a potential nonlinear impact, with durations longer than 10 years negatively affecting memory performance. Higher drug load of anti-seizure medication tended to be related to better overnight retention of episodic memory. Contrary to expectations, longer SWS duration showed a trend towards decreased episodic memory performance.
Analyses of associations between memory types and EEG band power during SWS revealed lower alpha-band power in the right frontal region as a significant predictor of better episodic memory retention. In conclusion, this research reveals that memory modalities are not equally affected by important epilepsy factors such as duration of epilepsy and medication, as well as by SWS spectral characteristics.
PubMed: 38725659
DOI: 10.3389/fphar.2024.1374760
Soins. Psychiatrie 2024
Dreams can be seen as a way of letting the mind wander while awake, an act of imagination that occurs during sleep, or a more or less chimerical imaginary representation of what one ardently hopes for. In all three cases, dreaming questions both our relationship with the real (what exists in itself) and with reality (what I perceive and understand of the real). From this point of view, dreams and madness are undeniably two experiences that radically question our access to reality.
Topics: Humans; Dreams; Imagination; Psychoanalytic Interpretation; Reality Testing
PubMed: 38719352
DOI: 10.1016/j.spsy.2024.03.005
Prehospital and Disaster Medicine May 2024
INTRODUCTION
Medical resuscitations in rugged prehospital settings require emergency personnel to perform high-risk procedures in low-resource conditions. Just-in-Time Guidance (JITG) delivered through augmented reality (AR) may be a solution. There is little literature on the utility of AR-mediated JITG tools for facilitating the performance of emergent field care.
STUDY OBJECTIVE
The objective of this study was to investigate the feasibility and efficacy of a novel AR-mediated JITG tool for emergency field procedures.
METHODS
Emergency medical technician-basic (EMT-B) and paramedic cohorts were randomized to either video training (control) or JITG-AR guidance (intervention) groups for performing bag-valve-mask (BVM) ventilation, intraosseous (IO) line placement, and needle-decompression (Needle-d) in a medium-fidelity simulation environment. For the interventional condition, subjects used an AR technology platform to perform the tasks. The primary outcome was participant task performance; the secondary outcome was participant-reported acceptability. Participant task score, task time, and acceptability ratings were reported descriptively and compared between the control and intervention groups using chi-square analysis for binary variables and unpaired t-testing for continuous variables.
RESULTS
Sixty participants were enrolled (mean age 34.8 years; 72% male). In the EMT-B cohort, there was no difference in average task performance score between the control and JITG groups for the BVM and IO tasks; however, the control group had higher performance scores for the Needle-d task (mean score difference 22%; P = .01). In the paramedic cohort, there was no difference in performance scores between the control and JITG groups for the BVM and Needle-d tasks, but the control group had higher task scores for the IO task (mean score difference 23%; P = .01). For all task and participant types, the control group performed tasks more quickly than the JITG group. There was no difference in participant usability or usefulness ratings between the JITG and control conditions for any of the tasks, although paramedics reported they were less likely to use the JITG equipment again (mean difference 1.96 rating points; P = .02).
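The comparison strategy described in the methods (chi-square for binary variables, unpaired t-tests for continuous ones) can be sketched with SciPy. The pass/fail counts and completion times below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Invented pass/fail counts for one task: rows = (control, JITG-AR)
table = np.array([[14, 1],
                  [10, 5]])
chi2, p_binary, dof, _ = chi2_contingency(table)

# Invented completion times in seconds; unpaired t-test as in the study design
control_times = np.array([62.0, 70, 65, 58, 73, 66, 61, 69])
jitg_times = np.array([88.0, 95, 90, 84, 99, 92, 87, 93])
t_stat, p_time = ttest_ind(control_times, jitg_times)
```

Note that `chi2_contingency` applies Yates' continuity correction by default for 2x2 tables, which is conservative at these small counts.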
CONCLUSIONS
This study provides preliminary evidence that AR-mediated guidance for emergency medical procedures is feasible and acceptable. These observations, coupled with AR's promise of real-time interaction and ongoing technological advancements, suggest potential for this modality in training and practice that justifies future investigation.
PubMed: 38712485
DOI: 10.1017/S1049023X24000372
The Journal of Clinical Endocrinology... May 2024
BACKGROUND
Essentially all individuals with multiple autoantibodies (AABs) will develop clinical type 1 diabetes. Multiple AABs and normal glucose tolerance define Stage 1 diabetes; abnormal glucose tolerance defines Stage 2. However, the rate of progression within these stages is heterogeneous, necessitating personalized risk calculators to improve clinical implementation.
METHODS
We developed 3 models using TrialNet's Pathway to Prevention data to accommodate the reality that not all risk variables are clinically available. The Small model included AAB status, fasting glucose, HbA1c and age, while the Medium and Large models added predictors of disease progression measured via oral glucose tolerance testing.
FINDINGS
All models markedly improved granularity regarding personalized risk missing from the current categorical stages of T1D. Model-derived risk calculations are consistent with the expected reduction of risk with increasing age and the increase in risk with higher glucose and lower insulin secretion, illustrating the suitability of the models. Adding glucose and insulin secretion data altered model-predicted probabilities within stages. In those with high 2-hour glucose, a high C-peptide markedly decreased predicted risk; lower C-peptide obviated the age-dependent risk of 2-hour glucose alone, providing a more nuanced estimate of the rate of disease progression within Stage 2.
CONCLUSIONS
While essentially all those with multiple AABs will develop type 1 diabetes, the rate of progression is heterogeneous and not explained by any individual single risk variable. The model-based probabilities developed here provide an adaptable personalized risk calculator to better inform decisions about how and when to monitor disease progression in clinical practice.
PubMed: 38712386
DOI: 10.1210/clinem/dgae292
Health Communication May 2024
Grounded in communication models of cultural competence, this study reports on the development and testing of the first module in a larger virtual reality (VR) implicit bias training for physicians to help them better: (a) recognize implicit bias and its effects on communication, patients, and patient care; (b) identify their own implicit biases and exercise strategies for managing them; and (c) learn and practice communicating with BIPOC patients in a culture-centered manner that demonstrates respect and builds trust. Led by communication faculty, a large, interdisciplinary team of researchers, clinicians, and engineers developed the first module, tested herein, which focused on training goal (a). Within the module, participants observe five scenes between patient Marilyn Hayes (a Black woman) and Dr. Richard Flynn (her obstetrician, a White man) during a postpartum visit. The interaction contains examples of implicit bias, and participants are asked to both identify and consider how implicit bias impacts communication, the patient, and patient care. The team recruited 30 medical students and resident physicians to participate in a lab-based study that included a pretest, a training experience of the module using a head-mounted VR display, and a posttest. Following the training, participants reported improved attitudes toward implicit bias instruction; greater perceived importance of determining patients' beliefs and perspectives for history-taking, treatment, and the provision of quality health care; and greater communication efficacy. Participants' agreement with the importance of assessing patients' perspectives, opinions, and psychosocial and cultural contexts did not significantly change. Implications for medical education about cultural competency and implicit bias are discussed.
PubMed: 38711251
DOI: 10.1080/10410236.2024.2347000