EURASIP Journal on Bioinformatics &... Dec 2016
Event-related potentials (ERPs) are widely used in brain-computer interface applications and in neuroscience. Normal EEG activity is rich in background noise, and therefore, in order to detect ERPs, it is usually necessary to average multiple trials to reduce the effects of this noise. The noise produced by EEG activity itself is not correlated with the ERP waveform, so averaging reduces the noise by a factor of the square root of N, where N is the number of averaged epochs. This is the simplest strategy currently used to detect ERPs: averaging all the ERP waveforms, which are time- and phase-locked. In this paper, a new method called GW6 is proposed, which calculates the ERP using a mathematical method based only on Pearson's correlation. The result is a graph with the same time resolution as the classical ERP, showing only positive peaks that represent the increase, in consonance with the stimuli, of correlation in the EEG signal across all channels. This new method is also useful for selectively identifying and highlighting some hidden components of the ERP response that are not phase-locked and that are usually obscured in the standard method based on averaging all the epochs. These hidden components seem to be caused by variations, between successive stimuli, of the ERP's inherent phase latency (jitter), although the same stimulus produces a reasonably constant phase across all EEG channels. For this reason, this new method could be very helpful for investigating these hidden components of the ERP response and for developing applications for scientific and medical purposes. Moreover, this new method is more resistant to EEG artifacts than the standard average calculation and could be useful in research and neurology.
The method we are proposing is provided as a routine written in the well-known Matlab programming language and can be easily and quickly rewritten in any other software language.
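The √N noise reduction that motivates trial averaging can be illustrated with a brief simulation (a hedged Python sketch with an arbitrary stand-in waveform and noise level, not the GW6 code itself):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
erp = np.sin(2 * np.pi * 3 * t)   # stand-in ERP waveform (hypothetical)
noise_sd = 2.0                    # assumed background-EEG noise level

def residual_noise(n_epochs):
    # Average n_epochs noisy trials and return the std of what remains
    # after subtracting the true waveform.
    epochs = erp + rng.normal(0.0, noise_sd, (n_epochs, t.size))
    return float(np.std(epochs.mean(axis=0) - erp))

# Averaging 100 epochs shrinks the residual noise roughly 10-fold (sqrt(100)).
ratio = residual_noise(1) / residual_noise(100)
```

The ratio comes out near 10, matching the 1/√N behaviour the abstract describes for N = 100 epochs.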
PubMed: 27335578
DOI: 10.1186/s13637-016-0043-z
Diagnostics (Basel, Switzerland) Jan 2024
Continuous thermodilution is a novel method of quantifying coronary flow (Q) in mL/min. To account for variability of Q within the cardiac cycle, the trace is smoothed with a 2 s moving average filter. This can sometimes be ineffective due to significant heart rate variability, ventricular extrasystoles, and deep inspiration, resulting in a fluctuating temperature trace and ambiguity in the location of the "steady state". This study aims to assess whether a longer moving average filter would smooth out fluctuations within the continuous thermodilution traces, resulting in improved interpretability and reproducibility on a test-retest basis. Patients with ANOCA underwent repeat continuous thermodilution measurements. Analysis of the traces was performed at averages of 10, 15, and 20 s to determine the maximum acceptable average. The maximum acceptable average was subsequently applied as a moving average filter and the traces were re-analysed to assess the practical consequences of a longer moving average. Reproducibility was then assessed and compared to a 2 s moving average. Of the averages tested, only 10 s met the criteria for acceptance. When the data were reanalysed with a 10 s moving average filter, there was no significant improvement in reproducibility; moreover, it resulted in a 12% diagnostic mismatch. Applying a longer moving average filter to continuous thermodilution data does not improve reproducibility. Furthermore, it results in a loss of fidelity in the traces and a 12% diagnostic mismatch. Overall, current practice should be maintained.
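The moving-average filtering at issue can be sketched as follows (an illustrative Python snippet; the sampling rate, synthetic trace, and noise level are assumptions, not the study's data):

```python
import numpy as np

fs = 50                              # assumed sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)         # 30 s synthetic temperature trace
rng = np.random.default_rng(0)
# Steady baseline plus a cyclic fluctuation plus sensor noise.
trace = 37.0 + 0.3 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.05, t.size)

def moving_average(x, window_s):
    # Centered moving average over a window of window_s seconds.
    w = int(window_s * fs)
    return np.convolve(x, np.ones(w) / w, mode="same")

smooth_2s = moving_average(trace, 2)
smooth_10s = moving_average(trace, 10)
interior = slice(500, -500)          # ignore edge effects of the filter
# The 10 s window flattens the cyclic fluctuation far more than the 2 s one,
# which is exactly the loss of trace fidelity the study cautions against.
```

The longer window does suppress fluctuation, but in doing so it also smears genuine features of the trace, consistent with the diagnostic mismatch reported above.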
PubMed: 38337801
DOI: 10.3390/diagnostics14030285
Journal of Dairy Science Feb 2021
Single-step genomic BLUP (ssGBLUP) requires compatibility between genomic and pedigree relationships for unbiased and accurate predictions. Scaling the genomic relationship matrix (G) to have the same averages as the pedigree relationship matrix (i.e., scaling by averages) is one way to ensure compatibility. This requires computing both relationship matrices, calculating averages, and changing G, whereas only the inverses of those matrices are needed in the mixed model equations. Therefore, the compatibility process can add extra computing burden. In the single-step Bayesian regression, the scaling is done by including a mean (μ) as a fixed effect in the model. The parameter μ can be interpreted as the average of the breeding values of the genotyped animals. In this study, such scaling, called automatic, was implemented in ssGBLUP via Quaas-Pollak transformation of the inverse of the relationship matrix used in ssGBLUP (H), which combines the inverses of the pedigree and genomic relationship matrices. Comparisons involved a simulated data set, and the genomic relationship matrix was computed using different allele frequencies either from the current population (i.e., realized allele frequencies), equal among all the loci, or from the base population. For all of the scenarios, we computed bias [defined as the average difference between true breeding values (TBV) and genomic estimated breeding values (GEBV)], accuracy (defined as the correlation between TBV and GEBV), and dispersion (defined as the regression coefficient of GEBV on TBV). With no scaling, the bias expressed in terms of genetic standard deviations was 0.86, 0.64, and 0.58 with realized, equal, and base population allele frequencies, respectively. With scaling by averages, which is currently used in ssGBLUP, bias was 0.07, 0.08, and 0.03, respectively. With automatic scaling, bias was 0.18 regardless of allele frequencies. 
Accuracies were similar among scaling methods, but about 0.1 lower in the scenario without scaling. The GEBV were more inflated without any scaling, whereas the automatic scaling performed similarly to the scaling by averages. The average dispersion for those methods was 0.94. When μ was treated as random, with the variance equal to differences between pedigree and genomic relationships, the bias was the same as with the scaling by averages. The automatic scaling is biased, especially when μ is treated as a fixed effect. The bias may be small in real data with fewer generations, when traits are undergoing weak selection, or when the number of genotyped animals is large.
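The "scaling by averages" compatibility step can be sketched numerically: choose a and b so that G* = a + bG matches the pedigree matrix's mean diagonal and mean overall element (an illustrative Python example on a made-up 3 × 3 case; the matrices are not from the study):

```python
import numpy as np

# Toy pedigree (A22) and genomic (G) relationship matrices for 3 animals.
A22 = np.array([[1.00, 0.25, 0.25],
                [0.25, 1.00, 0.50],
                [0.25, 0.50, 1.00]])
G = np.array([[0.90, 0.10, 0.15],
              [0.10, 1.05, 0.40],
              [0.15, 0.40, 0.95]])

# Solve a + b*mean(G) = mean(A22) and a + b*mean(diag G) = mean(diag A22).
mG, mA = G.mean(), A22.mean()
dG, dA = np.diag(G).mean(), np.diag(A22).mean()
b = (dA - mA) / (dG - mG)
a = mA - b * mG
G_star = a + b * G   # scaled genomic matrix, compatible with the pedigree
```

After the transformation, G* has the same average diagonal and average overall element as A22, which is the compatibility condition the abstract refers to.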
Topics: Animals; Bayes Theorem; Gene Frequency; Genome; Genomics; Genotype; Models, Genetic; Models, Statistical; Pedigree; Phenotype
PubMed: 33309381
DOI: 10.3168/jds.2020-18969
Magnetic Resonance in Medicine Jun 2023
PURPOSE
To develop a motion-robust reconstruction technique for free-breathing cine imaging with multiple averages.
METHOD
Retrospective motion correction through multiple average k-space data elimination (REMAKE) was developed using iterative removal of k-space segments (from individual k-space samples) that contribute most to motion corruption while combining any remaining segments across multiple signal averages. A variant of REMAKE, termed REMAKE+, was developed to address any losses in SNR due to k-space information removal. With REMAKE+, multiple reconstructions using different initial conditions were performed, co-registered, and averaged. Both techniques were validated against clinical "standard" signal averaging reconstruction in a static phantom (with simulated motion) and 15 patients undergoing free-breathing cine imaging with multiple averages. Quantitative analysis of myocardial sharpness, blood/myocardial SNR, myocardial-blood contrast-to-noise ratio (CNR), as well as subjective assessment of image quality and rate of diagnostic quality images were performed.
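The idea of discarding motion-corrupted data before combining signal averages can be caricatured as follows (a deliberately simplified Python sketch on synthetic 1-D data that drops one whole corrupted average; REMAKE itself removes individual k-space segments iteratively):

```python
import numpy as np

rng = np.random.default_rng(3)
n_avg, n_k = 4, 64
truth = np.exp(-np.linspace(-3, 3, n_k) ** 2)         # synthetic 1-D "k-space" profile
averages = truth + rng.normal(0, 0.02, (n_avg, n_k))  # four signal averages
averages[2] += 0.5                                    # one average corrupted by "motion"

# Score each average by its deviation from the across-average median and
# drop the worst before combining the rest.
median = np.median(averages, axis=0)
scores = np.abs(averages - median).sum(axis=1)
keep = np.argsort(scores)[:-1]        # discard the most corrupted average
combined = averages[keep].mean(axis=0)

naive = averages.mean(axis=0)         # "standard" signal averaging
rmse = lambda x: float(np.sqrt(np.mean((x - truth) ** 2)))
# Selective combination beats naive averaging when one average is corrupted.
```

The outlier-rejection step lowers the RMSE relative to plain averaging, mirroring (in miniature) the phantom RMSE improvement reported in the Results.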
RESULTS
In phantom, motion artifacts using "standard" (RMS error [RMSE]: 2.2 ± 0.5) were substantially reduced using REMAKE/REMAKE+ (RMSE: 1.5 ± 0.4/1.0 ± 0.4, p < 0.01). In patients, REMAKE/REMAKE+ led to higher myocardial sharpness (0.79 ± 0.09/0.79 ± 0.1 vs. 0.74 ± 0.12 for "standard", p = 0.004/0.04), higher image quality (1.8 ± 0.2/1.9 ± 0.2 vs. 1.6 ± 0.4 for "standard", p = 0.02/0.008), and a higher rate of diagnostic quality images (99%/100% vs. 94% for "standard"). Blood/myocardial SNR for "standard" (94 ± 30/33 ± 10) was higher vs. REMAKE (80 ± 25/28 ± 8, p = 0.002/0.005) and tended to be lower vs. REMAKE+ (105 ± 33/36 ± 12, p = 0.02/0.06). Myocardial-blood CNR for "standard" (61 ± 22) was higher vs. REMAKE (53 ± 19, p = 0.003) and lower vs. REMAKE+ (69 ± 24, p = 0.007).
CONCLUSIONS
Compared to "standard" signal averaging reconstruction, REMAKE and REMAKE+ provide improved myocardial sharpness, image quality, and rate of diagnostic quality images.
Topics: Humans; Magnetic Resonance Imaging, Cine; Retrospective Studies; Heart; Respiration; Motion; Artifacts
PubMed: 36763898
DOI: 10.1002/mrm.29613
Catheterization and Cardiovascular... Feb 2022
AIMS
We evaluated the occurrence and physiology of respiration-related beat-to-beat variations in resting Pd/Pa and FFR during intravenous adenosine administration, and its impact on clinical decision-making.
METHODS AND RESULTS
Coronary pressure tracings at rest and at plateau hyperemia were analyzed in a total of 39 stenoses from 37 patients, and respiratory rate was calculated with ECG-derived respiration (EDR) in 26 stenoses from 26 patients. Beat-to-beat variations in FFR occurred in a cyclical fashion and were strongly correlated with respiratory rate (R = 0.757, p < 0.001). There was no correlation between respiratory rate and variations in resting Pd/Pa. When single-beat averages were used to calculate FFR, mean ΔFFR was 0.04 ± 0.02. With averaging of FFR over three or five cardiac cycles, mean ΔFFR decreased to 0.02 ± 0.02 and 0.01 ± 0.01, respectively. Using an FFR ≤ 0.80 threshold, stenosis classification changed in 20.5% (8/39), 12.8% (5/39), and 5.1% (2/39) of cases for single-beat, three-beat, and five-beat averaged FFR, respectively. The impact of respiration was more pronounced in patients with pulmonary disease (ΔFFR 0.05 ± 0.02 vs 0.03 ± 0.02, p = 0.021).
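The effect of beat averaging on respiration-driven FFR swings can be sketched as follows (illustrative Python with an assumed respiratory amplitude and breathing rate, not patient data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_beats = 60
# Respiration-linked swing: ~4.5 beats per breath, amplitude assumed 0.03.
resp = 0.03 * np.sin(2 * np.pi * np.arange(n_beats) / 4.5)
ffr_beats = 0.80 + resp + rng.normal(0, 0.005, n_beats)  # per-beat FFR values

def beat_averaged(x, k):
    # Moving average over k consecutive cardiac cycles.
    return np.convolve(x, np.ones(k) / k, mode="valid")

spread = lambda x: float(x.max() - x.min())
d1 = spread(ffr_beats)
d3 = spread(beat_averaged(ffr_beats, 3))
d5 = spread(beat_averaged(ffr_beats, 5))
# Averaging over more beats progressively damps the respiratory swing,
# echoing the reported drop in mean dFFR from 0.04 to 0.02 to 0.01.
```

Five-beat averaging spans most of a breath, so the cyclic component largely cancels, which is why the five-beat value crosses the 0.80 treatment threshold far less often.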
CONCLUSION
Beat-to-beat variations in FFR during plateau hyperemia related to respiration are common, of clinically relevant magnitude, and frequently lead FFR to cross treatment thresholds. A five-beat averaged FFR overcomes the clinically relevant impact of FFR variation.
Topics: Adenosine; Cardiac Catheterization; Coronary Angiography; Coronary Stenosis; Coronary Vessels; Fractional Flow Reserve, Myocardial; Humans; Hyperemia; Predictive Value of Tests; Respiration; Severity of Illness Index; Treatment Outcome; Vasodilator Agents
PubMed: 34766734
DOI: 10.1002/ccd.30012
Journal of Vision Jan 2023
Many studies have shown that observers can accurately estimate the average feature of a group of objects. However, the way the visual system relies on the information from each individual item is still under debate. Some models suggest that some or all items are sampled and averaged arithmetically. Another strategy implies "robust averaging," in which middle elements gain greater weight than outliers. One version of a robust averaging model was recently suggested by Teng et al. (2021), who studied motion direction averaging in skewed feature distributions and found systematic biases toward their modes. They interpreted these biases as evidence for robust averaging and suggested a probabilistic weighting model based on minimization of a virtual loss function. In four experiments, we replicated systematic skew-related biases in another feature domain, namely, orientation averaging. Importantly, we show that the magnitude of the bias is not determined by the location of the mean or mode alone but is substantially defined by the shape of the whole feature distribution. We test a model that accounts for such distribution-dependent biases and robust averaging in a biologically plausible way. The model is based on well-established mechanisms of spatial pooling and population encoding of local features by neurons with large receptive fields. Both the loss-function model and the population-coding model with a winner-take-all decoding rule accurately predicted the observed patterns, suggesting that the pooled population response model can be considered a neural implementation of the computational algorithms of information sampling and robust averaging in ensemble perception.
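A minimal version of the population-coding account can be sketched as follows (Python, with an assumed tuning width and a made-up skewed orientation sample; winner-take-all decoding reads out the most active channel):

```python
import numpy as np

prefs = np.arange(0.0, 180.0, 1.0)   # preferred orientations of the channels (deg)
sigma = 15.0                         # assumed tuning width (deg)
# A skewed sample of item orientations: mode near 10-20 deg, long right tail.
items = np.array([10.0, 12.0, 14.0, 16.0, 20.0, 30.0, 50.0, 80.0])

def channel_response(theta):
    # Gaussian tuning on the circular orientation space (period 180 deg).
    d = np.abs(prefs - theta)
    d = np.minimum(d, 180.0 - d)
    return np.exp(-d**2 / (2.0 * sigma**2))

pooled = sum(channel_response(th) for th in items)  # spatial pooling over items
wta_estimate = float(prefs[np.argmax(pooled)])      # winner-take-all readout
# The readout lands near the mode of the skewed sample, below its
# arithmetic mean of 29 deg -- a distribution-dependent bias.
```

Because the pooled response peaks where items cluster, the decoded "average" is pulled toward the mode, which is the skew-related bias pattern the experiments replicate.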
Topics: Humans; Motion Perception; Neurons; Motion
PubMed: 36602815
DOI: 10.1167/jov.23.1.5
Frontiers in Public Health 2021
Coronavirus disease 2019 (COVID-19) is a form of disease triggered by a new strain of coronavirus. This paper proposes a novel model termed "deep fractional max pooling neural network (DFMPNN)" to diagnose COVID-19 more efficiently. This 12-layer DFMPNN replaces the max pooling (MP) and average pooling (AP) of ordinary neural networks with a novel pooling method called "fractional max-pooling" (FMP). In addition, multiple-way data augmentation (DA) is employed to reduce overfitting, and model averaging (MA) is used to reduce randomness. We ran our algorithm on a four-category dataset that contained COVID-19, community-acquired pneumonia, secondary pulmonary tuberculosis (SPT), and healthy control (HC) images. Ten runs on the test set show that the micro-averaged F1 (MAF) score of our DFMPNN is 95.88%. The proposed DFMPNN is superior to 10 state-of-the-art models, and FMP outperforms traditional MP, AP, and L2-norm pooling (L2P).
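Of the listed ingredients, model averaging is the simplest to illustrate: softmax outputs from several training runs are averaged before the class decision (a Python sketch with made-up probabilities, not the DFMPNN itself):

```python
import numpy as np

# Softmax outputs of three independently trained runs for one test image,
# over the four classes (COVID-19, CAP, SPT, HC) -- made-up numbers.
runs = np.array([[0.70, 0.15, 0.10, 0.05],
                 [0.55, 0.30, 0.10, 0.05],
                 [0.60, 0.20, 0.15, 0.05]])

averaged = runs.mean(axis=0)            # model averaging over the runs
prediction = int(np.argmax(averaged))   # final class decision (0 = COVID-19 here)
```

Averaging over runs damps the run-to-run randomness of any single trained network, which is the stated purpose of MA in the pipeline.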
Topics: Algorithms; COVID-19; Humans; Neural Networks, Computer; Pneumonia; SARS-CoV-2
PubMed: 34447739
DOI: 10.3389/fpubh.2021.726144
Cureus May 2021
Background The Connecticut Orthopaedic Institute (COI) conceptualized a Pivot Plan during an elective surgery moratorium at the beginning of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic with the goal of planning and executing orthopedic procedures safely. With the resumption of elective surgeries and the continued planning of surgical recovery over the months (and possibly years) to follow, facilities must brace themselves for repeat waves of COVID-19. Here we share the Pivot Plan, its implementation process, our evaluation of patient safety, and program performance during a pandemic. This could inform the efforts of other institutions seeking to restart non-emergent surgeries during similarly trying times in the future. Methods The COI formed a multidisciplinary team of leaders that met weekly to design a Pivot Plan and a dashboard to guide the resumption of surgeries and assess the plan's performance. The plan revolved around four domains: safety, space, staff, and supplies. It was implemented in two COI-affiliated facilities: MidState Medical Center (MMC) and St. Vincent's Medical Center (SVMC). Monthly metrics from May to November 2020 were compared to six-month averages for the pre-pandemic baseline period from September 2019 to February 2020. Results The total number (N) of elective orthopaedic cases pre-COVID averaged 372 cases per month for MMC and 197 for SVMC. Post-COVID, N averaged 361 for MMC and 243 for SVMC, illustrating that COI was able to perform elective surgeries amid a worsening pandemic. Same-day (SD) discharge rates for total joint arthroplasty (TJA) pre-COVID averaged 8% for MMC and 3% for SVMC. Post-COVID, the SD average was 16.7% for MMC and 11.4% for SVMC. These data indicate that orthopaedic providers were cognizant of length of stay in order to reduce the risk of in-hospital exposure to COVID-19.
The 30-day readmission (30R) rate for TJA pre-COVID averaged 1.4% for MMC and 2.7% for SVMC. A high level of care and follow-up is reflected in a lower average 30R post-COVID, 1.1% for both MMC and SVMC. Transitions for TJA patients to their home settings after surgery also reflect the quality of care and the efficiency of the patient throughput process with necessary precautions in place. Post-COVID, the patient transition to home (T) averaged 98.1% for MMC and 97.5% for SVMC compared to T = 96.8% for MMC and 88% for SVMC pre-COVID. No patients experienced deep vein thrombosis or pulmonary embolism during the time period of the project. Positive COVID-19 diagnosis 23 days after discharge was 0% at MMC and 0.2% at SVMC. Conclusion The COI Pivot Plan was successfully implemented at two different hospitals offering elective orthopaedic surgeries to a varied patient population. The precautions taken by COI were effective in controlling the spread of the SARS-CoV-2 virus while returning to elective orthopaedic surgery. Furthermore, data collected before and after the onset of the COVID-19 pandemic indicated that program performance and quality improved.
PubMed: 34150411
DOI: 10.7759/cureus.15077
Journal of Dairy Science Jan 2019
Spray strategies (e.g., flow rate and spray timing) may affect the surrounding microclimate and how cows use soakers, affecting cooling efficiency. Our objective was to evaluate the combined effects of spray timing (i.e., frequency, low: 3 min on, 6 min off; or high: 1.5 min on, 3 min off) and flow rates (3.3 or 4.9 L/min) on behavioral and physiological responses to heat load and production in Holstein cows managed in a freestall barn. In a 2 × 2 Latin square design, 3 cohorts of 4 pairs of cows averaging (±standard deviation) 36.7 ± 5.4 kg/d of milk were tested for 3 d/treatment. Water was sprayed at the feedline from 0815 to 2330 h, when air temperature and relative humidity averaged 27 ± 3°C and 37 ± 7%, respectively. The overall quantity of water sprayed was not affected by spray timing; it varied only as a function of flow rate. Cows' posture and location within the pen were measured continuously, whereas feeding and body temperature were recorded every 3 min over 24 h/d. Respiration rates were recorded daily every 45 min from 0900 to 2000 h. Neither spray timing nor flow rates affected posture, location in the pen, feeding activity, or respiration rates. Overall, on average, cows spent 12.6 ± 0.4 h/d lying down and 5.8 ± 0.3 h/d in the feed bunk area. While in the feed bunk area, cows spent 78 ± 3% of their time feeding. Average respiration rate ranged from 57 to 59 ± 3 breaths/min across treatments. Although body temperature tended to be reduced when using the higher flow rate, this difference was 0.1°C when comparing 24-h averages (4.9 vs. 3.3 L/min: 38.6 vs. 38.7 ± 0.1°C). Body temperature differences, however, were more marked and statistically different when soakers were cycling, especially between 1100 and 2200 h. Despite this, the magnitude of the hourly differences was <0.2°C. Milk production also tended to increase by 1.5 kg/d when using the higher flow rate.
When using the same water volume, spray timing did not affect cow behavior, physiology, or production. Flow rate had a small effect on milk production and body temperature but the biological relevance of these differences is unclear, especially in this situation where all cows were relatively cool.
Topics: Animal Husbandry; Animals; Behavior, Animal; Body Temperature; Cattle; Economics; Female; Hot Temperature; Male; Milk; Respiratory Rate; Time Factors
PubMed: 30343920
DOI: 10.3168/jds.2018-14962
International Journal of Environmental... Jan 2020
Low-cost, portable particle sensors (n = 3) were designed, constructed, and used to monitor human exposure to particle pollution at various locations and times in Lubbock, TX. The air sensors consisted of a Sharp GP2Y1010AU0F dust sensor interfaced to an Arduino Uno R3, and a FONA808 3G communications module. The Arduino Uno was used to receive the signal from calibrated dust sensors to provide a concentration (µg/m³) of suspended particulate matter and coordinate wireless transmission of data via the 3G cellular network. Prior to use for monitoring, dust sensors were calibrated against a reference aerosol monitor (RAM-1) operating independently. Sodium chloride particles were generated inside of a 3.6 m³ mixing chamber while the RAM-1 and each dust sensor recorded signals, and calibration was achieved for each dust sensor independently of the others by direct comparison with the RAM-1 reading. In an effort to improve the quality of the data stream, the effect of averaging replicate individual pulses of the Sharp sensor when analyzing zero air was studied. Averaging data points reduces the standard deviation for all sensors with n < 2000 averages, but averaging produced diminishing returns after approximately 2000 averages. The sensors exhibited standard deviations for replicate measurements of 3-6 µg/m³ and corresponding 3σ detection limits of 9-18 µg/m³ when 2000 pulses of the dust sensor LED were averaged over an approximately 2 minute data collection/transmission cycle. To demonstrate portable monitoring, concentration values from the dust sensors were sent wirelessly in real time to a channel, while tracking the sensor's latitude and longitude using an on-board Global Positioning System (GPS) sensor. Outdoor and indoor air quality measurements were made at different places and times while human volunteers carried sensors. The measurements indicated walking by restaurants and cooking at home increased the exposure to particulate matter.
The construction of the dust sensors and data collected from this research enhance the current research by describing an open-source concept and providing initial measurements. In principle, sensors can be massively multiplexed and used to generate real-time maps of particulate matter around a given location.
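The diminishing-returns behaviour of pulse averaging can be reproduced in a few lines (a Python sketch with an arbitrary noise level, not the sensors' actual data):

```python
import numpy as np

rng = np.random.default_rng(7)
# Simulated zero-air readings: true signal 0 plus per-pulse sensor noise
# (the noise level is arbitrary, for illustration only).
pulses = rng.normal(0.0, 100.0, size=100_000)

def sd_of_n_pulse_averages(n):
    # Group the stream into blocks of n pulses, average each block,
    # and return the standard deviation of the block averages.
    m = (pulses.size // n) * n
    return float(pulses[:m].reshape(-1, n).mean(axis=1).std())

sd_1, sd_500, sd_2000, sd_4000 = (sd_of_n_pulse_averages(n)
                                  for n in (1, 500, 2000, 4000))
# Standard deviation falls roughly as 1/sqrt(n): large early gains,
# then flattening past ~2000 averages.
```

Because the noise scales as 1/√n, doubling n from 2000 to 4000 buys only a ~30% further reduction, consistent with the ~2000-pulse operating point chosen above.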
Topics: Air Pollutants; Dust; Environmental Exposure; Environmental Monitoring; Humans; Particle Size; Particulate Matter; Texas; Wireless Technology
PubMed: 32013139
DOI: 10.3390/ijerph17030843