Acta Crystallographica. Section D,... May 2018
Macromolecular crystallography is now a mature and widely used technique that is essential in the understanding of biology and medicine. Increases in computing power combined with robotics have not only enabled large numbers of samples to be screened and characterized but have also enabled better decisions to be taken on data collection itself. This led to the development of MASSIF-1 at the ESRF, the first beamline in the world to run fully automatically while making intelligent decisions taking user requirements into account. Since opening in late 2014, the beamline has processed over 42 000 samples. Improvements have been made to the speed of the sample-handling robotics and error management within the software routines. The workflows initially put into place, while highly innovative at the time, have been expanded to include increased complexity and additional intelligence using the information gathered during characterization; this includes adapting the beam diameter dynamically to match the diffraction volume within the crystal. Complex multi-position and multi-crystal data collections have now also been integrated into the selection of experiments available. This has led to increased data quality and throughput, allowing even the most challenging samples to be treated automatically.
Topics: Algorithms; Crystallography, X-Ray; Data Collection; Macromolecular Substances; Receptors, G-Protein-Coupled; Specimen Handling; Synchrotrons; Time Factors; Workflow
PubMed: 29717714
DOI: 10.1107/S2059798318003728
Value in Health : the Journal of the... Oct 2021
Estimation of the Width of Uncertainty in Care Consumption and Costs When Using Common Data Collection Tools in Economic Evaluations: A Benchmark for Sensitivity Analyses.
OBJECTIVES
This study aimed to evaluate the uncertainty related to the use of common collection tools to assess costs in economic evaluations compared with an exhaustive administrative database.
METHODS
A pragmatic study was performed using preexisting cost-effectiveness studies. Patients were probabilistically matched with themselves in the French National Health Data System (Système National des Données de Santé [SNDS]), and all their reimbursed hospital and ambulatory care data during the study were extracted. Outcomes included the ratio of the number of each type of resources consumed using trial data (case report forms for ambulatory care and local hospital data for hospital care) versus the SNDS and the ratio of corresponding costs. Mean ratios and 95% confidence intervals (CIs) were calculated using bootstrapping. The impact of the collection tool on the result of the economic evaluation was calculated with the difference in costs between the 2 treatment arms with both collection methods.
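The percentile-bootstrap approach to a mean ratio mentioned above can be sketched in Python. This is a minimal illustration with hypothetical data, not the study's actual code; the abstract does not specify the estimator, so the ratio is taken here as the ratio of totals (trial-recorded resources over SNDS-recorded resources), one of several possible definitions:

```python
import random

def bootstrap_ratio_ci(trial_counts, registry_counts, n_boot=10_000, seed=0):
    """Percentile bootstrap CI for the ratio of resources counted in
    trial data versus an administrative registry (paired per patient)."""
    rng = random.Random(seed)
    pairs = list(zip(trial_counts, registry_counts))
    n = len(pairs)
    ratios = []
    for _ in range(n_boot):
        # Resample patients with replacement and recompute the ratio of totals.
        sample = [pairs[rng.randrange(n)] for _ in range(n)]
        trial_total = sum(t for t, _ in sample)
        registry_total = sum(r for _, r in sample)
        ratios.append(trial_total / registry_total)
    ratios.sort()
    point = sum(t for t, _ in pairs) / sum(r for _, r in pairs)
    lo = ratios[int(0.025 * n_boot)]
    hi = ratios[int(0.975 * n_boot)]
    return point, (lo, hi)
```

A point estimate below 1 with a CI excluding 1 would indicate systematic undercounting by the trial collection tool, which is how the ratios reported below can be read.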
RESULTS
Five cost-effectiveness studies were included in the analysis. A total of 397 patients had the SNDS hospital data, and 321 had ambulatory care data. Common collection tools underestimated hospital admissions by 13% (95% CI 8-20), corresponding costs by 5% (95% CI 2-14), and ambulatory acts by 41% (95% CI 33-51), with large variations in costs depending on the study. There was no change in the economic conclusion in any study.
CONCLUSIONS
The use of common collection tools underestimates healthcare resource consumption and its associated costs, particularly for ambulatory care. Our results could provide useful evidence-based estimates to inform the parameters of sensitivity analyses in future cost-effectiveness analyses.
Topics: Benchmarking; Cost-Benefit Analysis; Data Collection; France; Humans; Pragmatic Clinical Trials as Topic; Statistics, Nonparametric; Uncertainty
PubMed: 34593164
DOI: 10.1016/j.jval.2021.05.004
Informatics in Primary Care 2014
BACKGROUND
Although the collection of patient ethnicity data is a requirement of publicly funded healthcare providers in the UK, recording of ethnicity is sub-optimal for reasons that remain poorly understood.
AIMS AND OBJECTIVES
We sought to understand enablers and barriers to the collection and utilisation of ethnicity data within electronic health records, how these practices have developed and what benefit this information provides to different stakeholder groups.
METHODS
We undertook an in-depth, qualitative case study drawing on interviews and documents obtained from participants working as academics, managers and administrators within the UK.
RESULTS
Information regarding patient ethnicity was collected and coded as administrative patient data, and/or in narrative form within clinical records. We identified disparities in the classification of ethnicity, approaches to coding and levels of completeness due to differing local, regional and national policies and processes. Most participants could not identify any clinical value of ethnicity information and many did not know if and when data were shared between services or used to support quality of care and research.
CONCLUSIONS
Findings highlighted substantial variations in data classification, and practical challenges in data collection and usage that undermine the integrity of data collected. Future work needs to focus on explaining the uses of these data to frontline clinicians, identifying resources that can support busy professionals to collect standardised data and then, once collected, maximising the utility of these data.
Topics: Data Collection; Ethnicity; Female; General Practice; Health Services Research; Hospital Administration; Humans; Information Storage and Retrieval; Interviews as Topic; Male; Qualitative Research; Socioeconomic Factors; United Kingdom
PubMed: 25207615
DOI: 10.14236/jhi.v21i3.63
Progress in Community Health... 2016
BACKGROUND
Community-engaged data collection offers an important opportunity to build community capacity to harness the power of data and create social change.
OBJECTIVES
To share lessons learned from engaging 16 adolescents and young adults from a partner community to collect data for a public opinion survey as part of a broader community-based participatory research (CBPR) project.
METHODS
We conducted an analysis of archival documents, process data, and an assessment of survey assistants' experiences.
LESSONS LEARNED
High-quality data were collected from a hard-to-reach population. Survey assistants benefited from exposure to research and gained professional skills. Key challenges included conducting surveys in difficult environments and managing schedule constraints during the school year. The tremendous investment made by project partners was vital to success.
CONCLUSIONS
Investments required to support engaged data collection were larger than anticipated, as were the rewards, prompting greater attention to the integration of adolescents and young adults in research efforts.
Topics: Adolescent; Capacity Building; Community-Based Participatory Research; Data Collection; Female; Health Surveys; Humans; Male; Massachusetts; Young Adult
PubMed: 27346767
DOI: 10.1353/cpr.2016.0027
Population Health Metrics Feb 2021
BACKGROUND
Electronic data collection is increasingly used for household surveys, but factors influencing design and implementation have not been widely studied. The Every Newborn-INDEPTH (EN-INDEPTH) study was a multi-site survey using electronic data collection in five INDEPTH health and demographic surveillance system sites.
METHODS
We described the experiences and learning involved in the design and implementation of the EN-INDEPTH survey, and undertook six focus group discussions with field and research team members to explore their experiences. Thematic analyses were conducted in NVivo 12 using an iterative process guided by a priori themes.
RESULTS
Five steps in selecting, adapting and implementing electronic data collection in the EN-INDEPTH study are described. Firstly, we reviewed possible electronic data collection platforms and selected the World Bank's Survey Solutions® as the most suited to the EN-INDEPTH study. Secondly, the survey questionnaire was coded and translated into local languages, and further context-specific adaptations were made. Thirdly, data collectors were selected and trained using a standardised manual; training varied between 4.5 and 10 days. Fourthly, the instruments were piloted in the field and the questionnaires finalised. During data collection, data collectors appreciated the built-in skip patterns and error messages, but unreliable internet connectivity was a challenge, especially for data synchronisation. In the fifth and final step, data management and analysis, data quality was considered higher and less time was spent on data cleaning. The possibility of using paradata to analyse survey timing and corrections was valued. Synchronisation and data transfer should be given special consideration.
CONCLUSION
We synthesised experiences using electronic data collection in a multi-site household survey, including perceived advantages and challenges. Our recommendations for others considering electronic data collection include ensuring adaptations of tools to local context, piloting/refining the questionnaire in one site first, buying power banks to mitigate against power interruption and paying attention to issues such as GPS tracking and synchronisation, particularly in settings with poor internet connectivity.
Topics: Data Accuracy; Electronics; Humans; Infant, Newborn; Surveys and Questionnaires
PubMed: 33557855
DOI: 10.1186/s12963-020-00226-z
Journal of Clinical Epidemiology Jul 2018
Comparative Study Review
OBJECTIVE
To compare and contrast different methods of qualitative evidence synthesis (QES) against criteria identified from the literature and to map their attributes to inform selection of the most appropriate QES method to answer research questions addressed by qualitative research.
STUDY DESIGN AND SETTING
Electronic databases, citation searching, and a study register were used to identify studies reporting QES methods. Attributes compiled from 26 methodological papers (2001-2014) were used as a framework for data extraction. Data were extracted into summary tables by one reviewer and then considered within the author team.
RESULTS
We identified seven considerations determining choice of methods from the methodological literature, encapsulated within the mnemonic Review question-Epistemology-Time/Timescale-Resources-Expertise-Audience and purpose-Type of data. We mapped 15 different published QES methods against these seven criteria. The final framework focuses on stand-alone QES methods but may also hold potential when integrating quantitative and qualitative data.
CONCLUSION
These findings offer a contemporary perspective as a conceptual basis for future empirical investigation of the advantages and disadvantages of different methods of QES. It is hoped that this will inform appropriate selection of QES approaches.
Topics: Data Collection; Evidence-Based Medicine; Qualitative Research; Systematic Reviews as Topic
PubMed: 29548841
DOI: 10.1016/j.jclinepi.2018.03.003
Trends in Cognitive Sciences Oct 2017
Review
Crowdsourcing data collection from research participants recruited from online labor markets is now common in cognitive science. We review who is in the crowd and who can be reached by the average laboratory. We discuss reproducibility and review some recent methodological innovations for online experiments. We consider the design of research studies and arising ethical issues. We review how to code experiments for the web, what is known about video and audio presentation, and the measurement of reaction times. We close with comments about the high levels of experience of many participants and an emerging tragedy of the commons.
Topics: Cognitive Science; Crowdsourcing; Data Collection; Humans; Reproducibility of Results
PubMed: 28803699
DOI: 10.1016/j.tics.2017.06.007
Mechanisms of Ageing and Development Sep 2020
Review
The interrogation of established, large-scale datasets presents great opportunities in health data science for the linkage and mining of potentially disparate resources to create new knowledge in a fast and cost-efficient manner. The number of datasets that can be queried in the field of multimorbidity is vast, ranging from national administrative and audit datasets, large clinical, technical and biological cohorts, through to more bespoke data collections made available by individual organisations and laboratories. However, with these opportunities also come technical and regulatory challenges that require an informed approach. In this review, we outline the potential benefits of using previously collected data as a vehicle for research activity. We illustrate the added value of combining potentially disparate datasets to find answers to novel questions in the field. We focus on the legal, governance and logistical considerations required to hold and analyse data acquired from disparate sources and outline some of the solutions to these challenges. We discuss the infrastructure resources required and the essential considerations in data curation and informatics management, and briefly discuss some of the analysis approaches currently used.
Topics: Data Collection; Datasets as Topic; Humans; Multimorbidity; Public Health Informatics
PubMed: 32622995
DOI: 10.1016/j.mad.2020.111310
JMIR Research Protocols May 2024
BACKGROUND
The COVID-19 pandemic and the subsequent need for social distancing required the immediate pivoting of research modalities. Research that had previously been conducted in person had to shift to remote data collection, and researchers had to develop remote data collection protocols with limited or no evidence to guide the process. As a result, the use of web-based platforms to conduct real-time research visits surged despite the lack of evidence backing these novel approaches.
OBJECTIVE
This paper aims to review the remote or virtual research protocols that have been used in the past 10 years, gather existing best practices, and propose recommendations for continuing to use virtual real-time methods when appropriate.
METHODS
Articles (n=22) published from 2013 to June 2023 were reviewed and analyzed to understand how researchers conducted virtual research that implemented real-time protocols. "Real-time" was defined as data collection with a participant through a live medium where a participant and research staff could talk to each other back and forth in the moment. We excluded studies for the following reasons: (1) studies that collected participant or patient measures for the sole purpose of engaging in a clinical encounter; (2) studies that solely conducted qualitative interview data collection; (3) studies that conducted virtual data collection such as surveys or self-report measures that had no interaction with research staff; (4) studies that described research interventions but did not involve the collection of data through a web-based platform; (5) studies that were reviews or not original research; (6) studies that described research protocols and did not include actual data collection; and (7) studies that did not collect data in real time, focused on telehealth or telemedicine, and were exclusively intended for medical and not research purposes.
RESULTS
Findings from studies conducted both before and during the COVID-19 pandemic suggest that many types of data can be collected virtually in real time. Results and best practice recommendations from the current protocol review will be used in the design and implementation of a substudy to provide more evidence for virtual real-time data collection over the next year.
CONCLUSIONS
Our findings suggest that virtual real-time visits are doable across a range of participant populations and can answer a range of research questions. Recommended best practices for virtual real-time data collection include (1) providing adequate equipment for real-time data collection, (2) creating protocols and materials for research staff to facilitate or guide participants through data collection, (3) piloting data collection, (4) iteratively accepting feedback, and (5) providing instructions in multiple forms. The implementation of these best practices and recommendations for future research are further discussed in the paper.
INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID)
DERR1-10.2196/53790.
Topics: Humans; COVID-19; Data Collection; Pandemics; Research Design; Telemedicine
PubMed: 38743477
DOI: 10.2196/53790
The International Journal of Artificial... Oct 2016
Review
PURPOSE
Dialysis is a highly quantitative therapy involving large volumes of both clinical and technical data. While automated data collection has been implemented for chronic dialysis, this has not been done for acute kidney injury patients treated with continuous renal replacement therapy (CRRT).
METHODS
After a brief review of the fundamental aspects of electronic medical records (EMRs), a new tool designed to provide clinicians with individualized CRRT treatment data is analyzed, with emphasis on its quality assurance capabilities.
RESULTS
The first platform addressing the problem of data collection and management with current CRRT machines (Sharesource system; Baxter Healthcare) is described. The system provides connectivity for the Prismaflex CRRT machine and enables both EMR connectivity and therapy analytics with 2 basic components: the connect module and the report module.
CONCLUSIONS
The enormous amount of data in CRRT should be collected and analyzed to enable adequate clinical decisions. Current CRRT technology presents significant limitations with consequent lack of rigorous analysis of technical data and relevant feedback. From a quality assurance perspective, these limitations preclude any systematic assessment of prescription and delivery trends that may be adversely affecting clinical outcomes. A detailed assessment of current practice limitations is provided together with several possible ways to address such limitations by a new technical tool.
Topics: Acute Kidney Injury; Data Collection; Humans; Renal Replacement Therapy
PubMed: 27748946
DOI: 10.5301/ijao.5000522