Journal of Acupuncture and Meridian..., Mar 2011
Review
Electrodermal activity (EDA) at acupuncture points (acupoints) has been investigated for its utility as a diagnostic aid, a therapeutic monitoring tool, and a physiological outcome measure. The research methodologies reported in published trials, however, vary considerably and publications often lack sufficient details about electrical instrumentation, technical procedures, laboratory conditions, recorded measures, and control comparisons to permit a critical appraisal of the studies or to replicate promising findings. We developed a 10-category (54 subitems) Quality of Reporting scale based on technical issues associated with EDA measurements, publication requirements for reporting EDA in the psychophysiological literature, and recommendations from the CONSORT Statement for reporting clinical trials. Using our Quality of Reporting scale, we extracted data from 29 studies that evaluated EDA at acupoints in patients and generated weighted scores for each of the 10 categories of essential information. Only 9 of the 29 studies reviewed scored a mean of greater than 50% for reporting details of essential information. To rigorously build a program of research on EDA at acupoints we need to standardize research methodology and reporting protocols. We propose a checklist of recommended informational items to report in future clinical trials that record EDA at acupoints.
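The weighted per-category scoring described above can be sketched as follows. The weights, subitem counts, and example values here are hypothetical, since the actual 54-subitem scale is not reproduced in the abstract:

```python
# Hypothetical sketch: a category holds several subitems scored as reported
# (1) or not reported (0); the study-level score is a weighted mean across
# category percentages.

def category_score(subitem_flags):
    """Fraction of subitems reported in one category, as a percentage."""
    return 100.0 * sum(subitem_flags) / len(subitem_flags)

def weighted_mean_score(categories, weights):
    """Weighted mean across categories; weights need not sum to 1."""
    total_w = sum(weights)
    return sum(w * category_score(c)
               for c, w in zip(categories, weights)) / total_w

# Hypothetical example: three categories with equal weights.
cats = [[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]]
score = weighted_mean_score(cats, [1, 1, 1])
# Category percentages are 50, 25 and 100, so the mean is about 58.3;
# a study like this one would clear the 50% reporting threshold.
```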
Topics: Acupuncture Points; Clinical Trials as Topic; Electrodiagnosis; Galvanic Skin Response; Guidelines as Topic; Humans; Monitoring, Physiologic; Outcome Assessment, Health Care; Psychophysiology; Publishing; Research Design
PubMed: 21440875
DOI: 10.1016/S2005-2901(11)60002-2
Emergency Medicine Journal: EMJ, Jul 2019
In this two-part series on sources of bias in studies of diagnostic test performance, we outline common errors and optimal conditions during three study phases: patient selection, interpretation of the index test and disease verification by a gold standard. Here in part 1, biases associated with suboptimal participant selection are discussed through the lens of partial verification bias and spectrum bias, both of which increase the proportion of participants who are the 'sickest of the sick' or the 'wellest of the well.' Especially through retrospective methodology, partial verification introduces bias by including patients who are test positive by a gold standard, since patients with a positive index test are more likely to go on to further gold standard testing. Spectrum bias is frequently introduced through case-control design, dropping of indeterminate results or convenience sampling. After reading part 1, the informed clinician should be better able to judge the quality of a diagnostic test study, its inherent limitations and whether its results could be generalisable to their practice. Part 2 will describe how interpretation of the index test and disease verification by a gold standard can contribute to diagnostic test bias.
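The mechanism of partial verification bias can be made concrete with expected counts. The prevalence, test accuracy, and verification probabilities below are hypothetical, and the calculation is a generic sketch rather than anything taken from the paper:

```python
# Illustrative sketch (hypothetical numbers): partial verification bias using
# expected counts. Verification by the gold standard depends on the index test
# result, so apparent accuracy in the verified subset is distorted.

def apparent_accuracy(n, prev, sens, spec, p_verify_pos, p_verify_neg):
    tp = n * prev * sens              # diseased, index positive
    fn = n * prev * (1 - sens)        # diseased, index negative
    fp = n * (1 - prev) * (1 - spec)  # well, index positive
    tn = n * (1 - prev) * spec        # well, index negative
    # Index-positive patients are verified far more often than index-negative.
    tp_v, fp_v = tp * p_verify_pos, fp * p_verify_pos
    fn_v, tn_v = fn * p_verify_neg, tn * p_verify_neg
    return tp_v / (tp_v + fn_v), tn_v / (tn_v + fp_v)

sens_app, spec_app = apparent_accuracy(
    n=10_000, prev=0.10, sens=0.80, spec=0.90,
    p_verify_pos=0.90, p_verify_neg=0.10)
# Apparent sensitivity (about 0.97) is inflated above the true 0.80, and
# apparent specificity (0.50) is deflated below the true 0.90.
```

The verified subset is enriched with test-positive patients, which is exactly the "sickest of the sick" enrichment the review describes.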
Topics: Bias; Diagnostic Tests, Routine; Humans; Patient Selection; Research Design; Retrospective Studies
PubMed: 31302605
DOI: 10.1136/emermed-2019-208446
Physics of Life Reviews, Sep 2023
Review
Psychology and neuroscience are concerned with the study of behavior, of internal cognitive processes, and their neural foundations. However, most laboratory studies use constrained experimental settings that greatly limit the range of behaviors that can be expressed. While focusing on restricted settings ensures methodological control, it risks impoverishing the object of study: by restricting behavior, we might miss key aspects of cognitive and neural functions. In this article, we argue that psychology and neuroscience should increasingly adopt innovative experimental designs, measurement methods, analysis techniques and sophisticated computational models to probe rich, ecologically valid forms of behavior, including social behavior. We discuss the challenges of studying rich forms of behavior as well as the novel opportunities offered by state-of-the-art methodologies and new sensing technologies, and we highlight the importance of developing sophisticated formal models. We exemplify our arguments by reviewing some recent streams of research in psychology, neuroscience and other fields (e.g., sports analytics, ethology and robotics) that have addressed rich forms of behavior in a model-based manner. We hope that these "success cases" will encourage psychologists and neuroscientists to extend their toolbox of techniques with sophisticated behavioral models - and to use them to study rich forms of behavior as well as the cognitive and neural processes that they engage.
Topics: Research Design; Social Behavior; Ethology; Neurosciences; Dissent and Disputes
PubMed: 37499620
DOI: 10.1016/j.plrev.2023.07.006
Journal of the American Medical..., 2012
Review
Usability factors are a major obstacle to health information technology (IT) adoption. The purpose of this paper is to review and categorize health IT usability study methods and to provide practical guidance on health IT usability evaluation. A total of 2025 references evaluating health IT used by clinicians were initially retrieved from the Medline database for 2003 to 2009. Titles and abstracts were first reviewed for inclusion, and full-text articles were then examined to determine final study eligibility. A total of 629 studies were categorized into the five stages of an integrated usability specification and evaluation framework that was based on a usability model and the system development life cycle (SDLC)-associated stages of evaluation. Theoretical and methodological aspects of 319 studies were extracted in greater detail; studies that focused on system validation (SDLC stage 2) were not assessed further. The number of studies by stage was: stage 1, task-based or user-task interaction, n=42; stage 2, system-task interaction, n=310; stage 3, user-task-system interaction, n=69; stage 4, user-task-system-environment interaction, n=54; and stage 5, user-task-system-environment interaction in routine use, n=199. The studies applied a variety of quantitative and qualitative approaches. Methodological issues included a lack of theoretical framework/model, a lack of detail regarding qualitative study approaches, a single evaluation focus, environmental factors not evaluated in the early stages, and guideline adherence as the primary outcome for decision support system evaluations. Based on the findings, a three-level stratified view of health IT usability evaluation is proposed and methodological guidance is offered based upon the type of interaction that is of primary interest in the evaluation.
Topics: Humans; Medical Informatics; Models, Theoretical; Research Design; Technology Assessment, Biomedical; User-Computer Interface
PubMed: 21828224
DOI: 10.1136/amiajnl-2010-000020
BMJ Open, Oct 2016
Review
OBJECTIVE
To assess the adequacy of reporting of non-inferiority trials alongside the consistency and utility of current recommended analyses and guidelines.
DESIGN
Review of randomised clinical trials that used a non-inferiority design published between January 2010 and May 2015 in medical journals that had an impact factor >10 (JAMA Internal Medicine, Archives of Internal Medicine, PLOS Medicine, Annals of Internal Medicine, BMJ, JAMA, Lancet and New England Journal of Medicine).
DATA SOURCES
Ovid (MEDLINE).
METHODS
We searched for non-inferiority trials and assessed the following: choice of non-inferiority margin and justification of margin; power and significance level for sample size; patient population used and how this was defined; any missing data methods used and assumptions declared and any sensitivity analyses used.
RESULTS
A total of 168 trial publications were included. Most trials concluded non-inferiority (132; 79%). The non-inferiority margin was reported for 98% (164), but less than half reported any justification for the margin (77; 46%). While most articles reported two different analyses (91; 54%), most commonly intention-to-treat (ITT) or modified ITT together with per-protocol, a large number conducted and reported only one analysis (65; 39%), most commonly the ITT analysis. There was a lack of clarity or inconsistency between the type I error rate and the corresponding CIs in 73 (43%) articles. Missing data were rarely considered, with 99 (59%) not declaring whether imputation techniques were used.
CONCLUSIONS
Reporting and conduct of non-inferiority trials is inconsistent and does not follow the recommendations in available statistical guidelines, which are not wholly consistent themselves. Authors should clearly describe the methods used and provide clear descriptions of and justifications for their design and primary analysis. Failure to do this risks misleading conclusions being drawn, with consequent effects on clinical practice.
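The core non-inferiority decision the review audits, comparing a confidence-interval bound against a pre-specified margin, can be sketched as follows. This is a minimal Wald-interval sketch with hypothetical rates and margin, not a full trial analysis, which would also pre-specify the analysis population and the one-sided alpha:

```python
import math

# Minimal sketch (hypothetical numbers, simple Wald interval): non-inferiority
# of a new treatment on a success-rate outcome is concluded when the lower
# bound of the 95% CI for the risk difference exceeds minus the margin.
# A two-sided 95% CI corresponds to a one-sided type I error rate of 0.025,
# which is the CI/alpha correspondence the review found inconsistently reported.

def noninferior(p_new, p_ctrl, n_new, n_ctrl, margin, z=1.96):
    diff = p_new - p_ctrl
    se = math.sqrt(p_new * (1 - p_new) / n_new
                   + p_ctrl * (1 - p_ctrl) / n_ctrl)
    lower = diff - z * se
    return lower, lower > -margin

lower, ok = noninferior(p_new=0.78, p_ctrl=0.80,
                        n_new=500, n_ctrl=500, margin=0.10)
# lower is about -0.07, above the -0.10 margin, so non-inferiority is concluded.
```

The sketch shows why the margin, its justification, and the exact CI reported all have to appear in the publication: each directly determines the trial's conclusion.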
Topics: Bias; Biomedical Research; Guidelines as Topic; Humans; Journal Impact Factor; Patient Selection; Periodicals as Topic; Publishing; Research Design; Sample Size; Statistics as Topic
PubMed: 27855102
DOI: 10.1136/bmjopen-2016-012594
Medical Archives (Sarajevo, Bosnia and..., Oct 2019
Review
INTRODUCTION
Inappropriate design of experimental studies in medicine inevitably leads to inaccurate or false results, which serve as basis for erroneous and biased conclusions.
AIM
The aim of our study was to investigate the prevalence of implementation of the basic principles of experimental design (local control, replication and randomization) in preclinical experimental studies performed either on animals in vivo or on animal/human material in vitro.
MATERIAL AND METHODS
Preclinical experimental studies were retrieved from the PubMed database, and the sample for analysis was randomly chosen from the retrieved publications. Implementation rate of basic experimental research principles (local control, randomization and replication) was established by careful reading of the sampled publications and their checking against predefined criteria.
RESULTS
Our study showed that only a minority of experimental preclinical studies had the basic principles of design completely implemented (7%), while the implementation rate of individual aspects of appropriate experimental design varied from as low as 9% to a maximum of 86%. The average impact factor of the surveyed studies was high and their publication dates relatively recent, suggesting that our results generalize to highly ranked contemporary journals.
CONCLUSION
The prevalence of experimental preclinical studies that did not completely implement the basic principles of research design is high, casting doubt on the validity of their results. If incorrect and biased, the results of published studies may mislead the authors of future studies and lead to fruitless research that wastes precious resources.
Topics: Animals; Biomedical Research; Control Groups; Humans; In Vitro Techniques; Random Allocation; Reproducibility of Results; Research Design
PubMed: 31819300
DOI: 10.5455/medarh.2019.73.298-302
Biostatistics (Oxford, England), Apr 2023
Multiregional clinical trials (MRCTs) provide the benefit of more rapidly introducing drugs to the global market; however, small regional sample sizes can lead to poor estimation quality of region-specific effects when using current statistical methods. With the publication of the International Conference for Harmonisation E17 guideline in 2017, the MRCT design is recognized as a viable strategy that can be accepted by regional regulatory authorities, necessitating new statistical methods that improve the quality of region-specific inference. In this article, we develop a novel methodology for estimating region-specific and global treatment effects for MRCTs using Bayesian model averaging. This approach can be used for trials that compare two treatment groups with respect to a continuous outcome, and it allows for the incorporation of patient characteristics through the inclusion of covariates. We propose an approach that uses posterior model probabilities to quantify evidence in favor of consistency of treatment effects across all regions, and this metric can be used by regulatory authorities for drug approval. We show through simulations that the proposed modeling approach results in lower MSE than a fixed-effects linear regression model and better control of type I error rates than a Bayesian hierarchical model.
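As a rough illustration of the model-averaging idea (not the paper's actual method, which works with patient-level, covariate-adjusted models), the sketch below averages two candidate models, one with a common treatment effect and one with region-specific effects, using BIC-approximated posterior model probabilities. All numbers are hypothetical region-level summaries:

```python
import math

# Hypothetical MRCT summaries: region effect estimates d with standard
# errors s; the small region (large SE) is region index 2.
d = [1.1, 0.9, 0.2]
s = [0.2, 0.2, 0.6]

# "Consistent" model: one common effect, the inverse-variance weighted mean.
w_iv = [1 / si**2 for si in s]
mu = sum(wi * di for wi, di in zip(w_iv, d)) / sum(w_iv)

def loglik(means):
    """Gaussian log-likelihood of the region estimates given model means."""
    return sum(-0.5 * math.log(2 * math.pi * si**2)
               - (di - mi)**2 / (2 * si**2)
               for di, mi, si in zip(d, means, s))

n = len(d)
bic_common = 1 * math.log(n) - 2 * loglik([mu] * n)  # 1 free parameter
bic_region = n * math.log(n) - 2 * loglik(d)         # n free parameters

# BIC-approximated posterior model probabilities, equal prior odds.
a, b = math.exp(-0.5 * bic_common), math.exp(-0.5 * bic_region)
p_common = a / (a + b)  # evidence in favor of consistency across regions

# Model-averaged effect for the small region: its noisy estimate is
# shrunk toward the common effect in proportion to p_common.
effect_small_region = p_common * mu + (1 - p_common) * d[2]
```

Here `p_common` plays the role of the consistency metric the abstract describes, and the averaged estimate illustrates how small regions borrow strength from the rest of the trial.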
Topics: Humans; Bayes Theorem; Treatment Outcome; Sample Size; Drug Approval; Probability; Research Design
PubMed: 34296263
DOI: 10.1093/biostatistics/kxab027
Journal of Biomedical Informatics, May 2022
Review
In the present systematic review we identified and summarised current research activities in the field of time series forecasting and imputation with the help of generative adversarial networks (GANs). We differentiate between imputation, which describes the filling of missing values at intermediate steps, and forecasting, which denotes the prediction of future values. In particular, we investigated the use of such methods in the biomedical domain. To this end, 1057 publications were identified with the help of PubMed, Web of Science and Scopus. All studies that describe the use of GANs for the imputation/forecasting of time series were included, irrespective of the application domain. Finally, 33 records were identified as eligible and grouped according to the topologies, losses, inputs and outputs of the presented GANs. In combination with a summary of all described application domains, this grouping served as a basis for analysing the peculiarities of the method in the biomedical context. Owing to the broad spectrum of biomedical research, nearly all recognised methodologies are also applied in this domain. We could not identify any approach that proved superior in the biomedical area. Although GANs were initially designed to work in the image domain, many publications show that they are capable of imputing/forecasting non-visual time series.
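The review's distinction between the two tasks can be made concrete with a toy example. The naive baselines below (linear interpolation and last-value carry-forward) merely stand in for a trained GAN generator and are not from the review:

```python
# Toy sketch of the two task setups: imputation fills missing values at
# intermediate time steps, forecasting predicts values beyond the series end.

def impute(series):
    """Fill internal None gaps by linear interpolation between known values."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(series):
        if v is None:
            lo = max(k for k in known if k < i)  # nearest known value left
            hi = min(k for k in known if k > i)  # nearest known value right
            out[i] = out[lo] + (i - lo) / (hi - lo) * (out[hi] - out[lo])
    return out

def forecast(series, horizon):
    """Predict future steps beyond the observed series (naive last value)."""
    return [series[-1]] * horizon

filled = impute([1.0, None, None, 4.0])  # fills intermediate gaps
future = forecast([1.0, 2.0, 3.0], 2)    # predicts beyond the series end
```

A GAN-based method replaces both baselines with a learned generator, but the input/output shape of each task, which the review uses to group the 33 records, stays the same.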
Topics: Bibliometrics; Forecasting; Neural Networks, Computer; Research Design; Time Factors
PubMed: 35346855
DOI: 10.1016/j.jbi.2022.104058
Clinical Trials (London, England), Dec 2018
Review
BACKGROUND
Recruiting the target number of participants within the pre-specified time frame agreed with funders remains a common challenge in the completion of a successful clinical trial and addressing this is an important methodological priority. While there is growing research around recruitment, navigating this literature to support an evidence-based approach remains difficult. The Online resource for Recruitment Research in Clinical triAls project aims to create an online searchable database of recruitment research to improve access to existing evidence and to identify gaps for future research.
METHODS
MEDLINE (Ovid), Scopus, Cochrane Database of Systematic Reviews and Cochrane Methodology Register, Science Citation Index Expanded and Social Sciences Citation Index within the ISI Web of Science and Education Resources Information Center were searched in January 2015. Search strategy results were screened by title and abstract, and full text obtained for potentially eligible articles. Studies reporting or evaluating strategies, interventions or methods used to recruit patients were included along with case reports and studies exploring reasons for patient participation or non-participation. Eligible articles were categorised as systematic reviews, nested randomised controlled trials and other designs evaluating the effects of recruitment strategies (Level 1); studies that report the use of recruitment strategies without an evaluation of impact (Level 2); or articles reporting factors affecting recruitment without presenting a particular recruitment strategy (Level 3). Articles were also assigned to 1, or more, of 42 predefined recruitment domains grouped under 6 categories.
RESULTS
More than 60,000 records were retrieved by the search, resulting in 56,030 unique titles and abstracts for screening, with a further 23 found through hand searches. A total of 4570 full text articles were checked; 2804 were eligible. Six percent of the included articles evaluated the effectiveness of a recruitment strategy (Level 1), with most of these assessing aspects of participant information, either its method of delivery (33%) or its content and format (28%).
DISCUSSION
Recruitment to clinical trials remains a common challenge and an important area for future research. The Online resource for Recruitment Research in Clinical triAls project provides a searchable, online database of research relevant to recruitment. The project has identified the need for researchers to evaluate their recruitment strategies to improve the evidence base and broaden the narrow focus of existing research to help meet the complex challenges faced by those recruiting to clinical trials.
Topics: Biomedical Research; Databases as Topic; Humans; Patient Selection; Sample Size
PubMed: 30165760
DOI: 10.1177/1740774518796156
Clinical Trials (London, England), Oct 2022
Meta-Analysis
BACKGROUND
Adaptive platform trials allow randomized controlled comparisons of multiple treatments using a common infrastructure and the flexibility to adapt key design features during the study. Nonetheless, they have been criticized due to the potential for time trends in the underlying risk level of the population. Such time trends lead to confounding between design features and risk level, which may introduce bias favoring one or more treatments. This is particularly true when experimental treatments are not all randomized during the same time period as the control, leading to the potential for bias from non-concurrent controls.
METHODS
Two analysis methods addressing this bias are stratification and adjustment. Stratification uses only comparisons between treatment cohorts randomized during identical time periods and does not use non-concurrent randomizations. Adjustment uses a modeled analysis including time period adjustment, allowing all data to be used, even from periods without concurrent randomization. We show that these competing approaches may be embedded in a common framework using network meta-analysis principles. We interpret the stages between adaptations in a platform trial as separate fixed design trials. This allows platform trials to be viewed as networks of direct randomized comparisons and indirect non-randomized comparisons. Network meta-analysis methodology can be re-purposed to aggregate the total information from a platform trial and to transparently decompose this total information into direct randomized evidence and indirect non-randomized evidence. This allows sensitivity to indirect information to be assessed and the two analysis methods to be clearly compared.
RESULTS
Simulations of platform trials were analyzed using a network approach implemented in the netmeta package in R. The results demonstrated bias of unadjusted methods in the presence of time trends in risk level. Adjustment and stratification were both unbiased when direct evidence and indirect evidence were consistent. Network tests of inconsistency may be used to diagnose inconsistency when it exists. In an illustrative network analysis of one of the treatment comparisons from the STAMPEDE platform trial in metastatic prostate cancer, indirect comparisons using non-concurrent controls were inconsistent with the information from direct randomized comparisons. This supports the primary analysis approach of STAMPEDE, which used only direct randomized comparisons.
CONCLUSION
Network meta-analysis provides a natural methodology for analyzing the network of direct and indirect treatment comparisons from a platform trial. Such analyses provide transparent separation of direct and indirect evidence, allowing assessment of the impact of non-concurrent controls. We recommend time-stratified analysis of concurrently controlled comparisons for primary analyses, with time-adjusted analyses incorporating non-concurrent controls reserved for secondary analyses. However, regardless of which methodology is used, a network analysis provides a useful supplement to the primary analysis.
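The non-concurrent-control problem and the stratified fix can be seen in a noise-free toy calculation. The numbers are hypothetical expected group means, not simulated trial data:

```python
# Noise-free toy of time-trend confounding in a platform trial: the underlying
# risk drifts by +0.5 in period 2, and experimental arm B truly has zero
# treatment effect but is only randomized during period 2.
control = {1: 0.0, 2: 0.5}  # control arm enrolls in both periods
arm_b = {2: 0.5}            # arm B enrolls only in period 2

# Naive analysis: compare arm B against ALL controls, concurrent or not.
naive = (sum(arm_b.values()) / len(arm_b)
         - sum(control.values()) / len(control))

# Stratified analysis: use only concurrent, same-period comparisons.
stratified = sum(arm_b[p] - control[p] for p in arm_b) / len(arm_b)
# naive wrongly shows a 0.25 benefit; stratified recovers the true 0 effect.
```

In network meta-analysis terms, the period-2 contrast is the direct randomized evidence, while the cross-period contrast folded into the naive estimate is the indirect, non-randomized evidence that a time-adjusted model must correct for.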
Topics: Bias; Humans; Male; Network Meta-Analysis; Randomized Controlled Trials as Topic; Research Design
PubMed: 35993542
DOI: 10.1177/17407745221112001