BMC Genomics, Dec 2022
BACKGROUND
As the amount of genomic data continues to grow, there is an increasing need for systematic ways to organize, explore, compare, analyze and share this data. Despite this, there is a lack of suitable platforms to meet this need.
RESULTS
OpenGenomeBrowser is a self-hostable, open-source platform that manages access to genomic data and drastically simplifies comparative genomics analyses. It enables users to interactively generate phylogenetic trees, compare gene loci, browse biochemical pathways, perform gene trait matching, create dot plots, execute BLAST searches, and access the underlying data. It features a flexible user management system, and its modular folder structure enables the organization of genomic data and metadata as well as the automation of analyses. We tested OpenGenomeBrowser with bacterial, archaeal and yeast genomes. We provide a Docker container to make installation and hosting simple. The source code, documentation and tutorials for OpenGenomeBrowser are available at opengenomebrowser.github.io, and a demo server is freely accessible at opengenomebrowser.bioinformatics.unibe.ch.
CONCLUSIONS
To our knowledge, OpenGenomeBrowser is the first self-hostable, database-independent comparative genome browser. It drastically simplifies commonly used bioinformatics workflows and enables convenient as well as fast data exploration.
Topics: Phylogeny; Data Management; Genomics; Genome; Computational Biology; Software
PubMed: 36575383
DOI: 10.1186/s12864-022-09086-3
BMC Research Notes, Jan 2022
Research data management (RDM) is the cornerstone of a successful research project, and yet it often remains an underappreciated art that gets overlooked in the hustle and bustle of everyday project management, even when required by funding bodies. If researchers are to strive for reproducible science that adheres to the FAIR principles, they need to manage the data associated with their research projects effectively. It is imperative to plan your RDM strategies early on and to set up your project organisation before embarking on the work. There are several different factors to consider: data management plans, data organisation and storage, publishing and sharing your data, ensuring reproducibility, and adhering to data standards. Additionally, it is important to reflect upon the ethical implications that might need to be planned for, and adverse issues that may need a mitigation strategy. This short article discusses these different areas, noting some best practices and detailing how to incorporate these strategies into your work. Finally, the article ends with a set of top ten tips for effective research data management.
Topics: Data Management; Humans; Publishing; Reproducibility of Results; Research Design; Research Personnel
PubMed: 35063017
DOI: 10.1186/s13104-022-05908-5
Developmental Cognitive Neuroscience, Oct 2020
The YOUth cohort study aims to be a trailblazer for open science. Being a large-scale, longitudinal cohort following children in their development from gestation until early adulthood, YOUth collects a vast amount of data through a variety of research techniques. Data are collected through multiple platforms, including facilities managed by Utrecht University and the University Medical Center Utrecht. In order to facilitate appropriate use of its data by research organizations and researchers, YOUth aims to produce high-quality, FAIR data while safeguarding the privacy of participants. This requires an extensive data infrastructure, set up by collaborative efforts of researchers, data managers, IT departments, and the Utrecht University Library. In the spirit of open science, YOUth will share its experience and expertise in setting up a high-quality research data infrastructure for sensitive cohort data. This paper describes the technical aspects of our data and data infrastructure, and the steps taken throughout the study to produce and safely store FAIR and high-quality data. Finally, we will reflect on the organizational aspects that are conducive to the success of setting up such an enterprise, and we consider the financial challenges posed by individual studies investing in sustainable science.
Topics: Adolescent; Child; Child, Preschool; Cohort Studies; Data Management; Female; Humans; Infant; Infant, Newborn; Longitudinal Studies; Male; Research Design
PubMed: 32906086
DOI: 10.1016/j.dcn.2020.100834
Frontiers in Public Health, 2023
INTRODUCTION
Non-Fungible Tokens (NFTs) are digital assets that are verified using blockchain technology to ensure authenticity and ownership. NFTs have the potential to revolutionize healthcare by addressing various issues in the industry.
METHOD
The goal of this study was to identify the applications of NFTs in healthcare. Our scoping review was conducted in 2023. We searched the Scopus, IEEE, PubMed, Web of Science, Science Direct, and Cochrane scientific databases using related keywords. The article selection process was based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).
RESULTS
After applying the inclusion and exclusion criteria, a total of 13 articles were chosen, and the extracted data were summarized and reported. The most common application of NFTs in healthcare was health data management (46% of articles), followed by supply chain management (31%). Furthermore, Ethereum was the main blockchain platform applied to NFTs in healthcare, used in 70% of cases.
DISCUSSION
The findings from this review indicate that the NFTs currently used in healthcare could transform the field. It also appears that researchers have not yet investigated the numerous potential uses of NFTs in healthcare, which could be exploited in the future.
Topics: Humans; Data Management; Databases, Factual; Industry; Research Personnel; Technology
PubMed: 38074727
DOI: 10.3389/fpubh.2023.1266385
Water Science and Technology: a..., Jul 2023
To assess the environmental impact of wastewater treatment, life cycle assessment (LCA) is a frequently applied instrument. However, these studies often require large amounts of data. The complexity and heterogeneity of these data result in the need for a systematic data management approach. The generation of the life cycle inventory (LCI) in particular holds the potential to be facilitated by automation. A case study in the wastewater sector was used to demonstrate the implementation of data management. A database structure was developed to store the raw data of the wastewater treatment plants (WWTPs) and make it accessible through code. The code interacted with the database, implemented calculations, and automatically created the inventory based on the processed data. The database provides a consistent structure for the raw data and can also be used for backup purposes. Because it is machine-readable, it can be accessed through the code that enables the automated generation of the LCI. As a proof of concept, a sequence of the code is provided with a user interface and can be tested online. We found that for most use cases, basic programming tools were sufficient for systematic data management; the approach is therefore considered accessible for LCA practitioners.
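The workflow this abstract describes — raw plant data in a relational database, code that queries it and derives inventory flows — can be sketched in a few lines. This is a minimal illustration of the general approach, not the paper's actual code; the table, column names, and the electricity-per-m³ flow are invented for the example.

```python
# Hypothetical sketch: WWTP raw data live in a small relational database,
# and code derives life cycle inventory (LCI) entries from it.
# Schema and values are illustrative, not taken from the case study.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE plant_data (
    plant TEXT, year INTEGER,
    volume_treated_m3 REAL,   -- annual treated wastewater volume
    electricity_kwh REAL      -- annual electricity consumption
)""")
conn.execute("INSERT INTO plant_data VALUES ('WWTP_A', 2021, 1.2e6, 4.8e5)")

def build_lci(conn):
    """Derive per-functional-unit inventory flows (here: kWh per m3 treated)."""
    rows = conn.execute(
        "SELECT plant, year, electricity_kwh / volume_treated_m3 FROM plant_data"
    ).fetchall()
    return [
        {"plant": p, "year": y, "flow": "electricity", "amount_per_m3": a}
        for p, y, a in rows
    ]

inventory = build_lci(conn)
print(inventory[0]["amount_per_m3"])  # 0.4 kWh per m3 treated
```

Keeping the raw data in one machine-readable store, as above, is what makes the inventory generation repeatable: re-running the query after new measurements arrive regenerates the LCI without manual spreadsheet work.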
Topics: Animals; Wastewater; Data Management; Environment; Water Purification; Life Cycle Stages
PubMed: 37452538
DOI: 10.2166/wst.2023.200
Journal of Medical Internet Research, Sep 2020
This study aims to review current issues regarding the application of blockchain technology in health care. We illustrated the various ways in which blockchain can solve current health care issues in three main arenas: data exchange, contracts, and supply chain management. This paper presents several current and projected uses of blockchain technology in the health care industry. We predicted which of these applications are likely to be adopted quickly and provided a supply chain example of tracking the transportation of organs for transplantation.
Topics: Blockchain; Data Management; Delivery of Health Care; Humans
PubMed: 32940618
DOI: 10.2196/17423
Journal of Biomedical Informatics, Jun 2023
BACKGROUND
The clinical documentation of cystoscopy includes visual and textual materials. However, the secondary use of visual cystoscopic data for educational and research purposes remains limited due to inefficient data management in routine clinical practice.
METHODS
A conceptual framework was designed to document cystoscopy in a standardized manner with three major sections: data management, annotation management, and utilization management. A Swiss-cheese model was proposed for quality control and root cause analyses. We defined the infrastructure required to implement the framework with respect to FAIR (findable, accessible, interoperable, reusable) principles. We applied two scenarios exemplifying data sharing for research and educational projects to ensure compliance with FAIR principles.
RESULTS
The framework was successfully implemented while following FAIR principles. The cystoscopy atlas produced from the framework could be presented in an educational web portal; a total of 68 full-length qualitative videos and corresponding annotation data were sharable for artificial intelligence projects covering frame classification and segmentation problems at case, lesion, and frame levels.
CONCLUSION
Our study shows that the proposed framework facilitates the storage of visual documentation in a standardized manner and enables FAIR data for education and artificial intelligence research.
Topics: Artificial Intelligence; Documentation; Data Management
PubMed: 37088456
DOI: 10.1016/j.jbi.2023.104369
Briefings in Bioinformatics, Sep 2019
Review
Deoxyribonuclease I (DNase I)-hypersensitive site sequencing (DNase-seq) has been widely used to determine chromatin accessibility and its underlying regulatory lexicon. However, exploring DNase-seq data requires sophisticated downstream bioinformatics analyses. In this study, we first review computational methods for all of the major steps in DNase-seq data analysis, including experimental design, quality control, read alignment, peak calling, annotation of cis-regulatory elements, genomic footprinting and visualization. The challenges associated with each step are highlighted. Next, we provide a practical guideline and a computational pipeline for DNase-seq data analysis by integrating some of these tools. We also discuss the competing techniques and the potential applications of this pipeline for the analysis of analogous experimental data. Finally, we discuss the integration of DNase-seq with other functional genomics techniques.
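The major steps the review names — quality control, read alignment, and peak calling — are typically chained as command-line tools. The dry-run sketch below only assembles illustrative commands; the tool choices (FastQC, Bowtie2, samtools, MACS2) and file names are common assumptions, not the specific pipeline integrated by the review.

```python
# Dry-run sketch of the major DNase-seq analysis steps named above.
# Tools and file names are illustrative assumptions, not the review's pipeline.
def dnase_seq_commands(sample: str, genome_index: str) -> list:
    fastq = f"{sample}.fastq.gz"
    bam = f"{sample}.bam"
    return [
        f"fastqc {fastq}",                                  # quality control
        f"bowtie2 -x {genome_index} -U {fastq} | "
        f"samtools sort -o {bam} -",                        # read alignment
        f"macs2 callpeak -t {bam} -n {sample} --nomodel",   # peak calling
    ]

for cmd in dnase_seq_commands("liver_rep1", "hg38"):
    print(cmd)
```

Downstream steps (cis-regulatory element annotation, genomic footprinting, visualization) would extend this list in the same pattern; building the commands as data first also makes it easy to log or resume a pipeline run.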
Topics: Computational Biology; DNA Footprinting; Data Management; Deoxyribonuclease I; Quality Control; Sequence Analysis, DNA
PubMed: 30010713
DOI: 10.1093/bib/bby057
Italian Journal of Dermatology and..., Apr 2021
Review
Born in 1995, teledermatology (TD) has now turned 25 years old. Since then, TD has evolved according to patients' and physicians' needs. The present review aimed to summarize the efforts and experiences carried out in the field of TD and its subspecialties, their evolution and future perspectives. A literature search was conducted in PubMed and Google Scholar. The state of the art of "tele-dermo research" includes TD and clinical trials, TD/TDS web platforms, and TDS and artificial intelligence studies. Finally, the future perspective of TD/TDS in the era of social distancing is discussed. Using TD in specific situations adds several benefits, including time-effectiveness of intervention, reduced waiting time for the first visit, reduced travel costs, reduced sanitary costs, and more equal patient access to specialist consultation. The communication technology devices currently available can adequately support the growing needs of tele-assistance. A main limitation is the current lack of a clear, common European regulation for practicing TD that encompasses privacy issues and data management. The pandemic lockdown of 2020 highlighted the importance of performing TD for all those patients, elderly and/or fragile, for whom the alternative would be no care at all. Many efforts are still needed to develop efficient workflows and TD programs that facilitate the interplay among the different TD actors, along with practice guidelines or position statements.
Topics: Adult; Aged; Artificial Intelligence; Data Management; Humans; Privacy
PubMed: 33960751
DOI: 10.23736/S2784-8671.21.06731-6
Applied Clinical Informatics, Mar 2024
BACKGROUND
Clinical research grapples with the efficient management of multimodal and longitudinal clinical data. In neuroscience especially, the volume of heterogeneous longitudinal data challenges researchers. While current research data management systems offer rich functionality, they suffer from architectural complexity that makes them difficult to install and maintain, and they require extensive user training.
OBJECTIVES
The focus is the development and presentation of a data management approach specifically tailored for clinical researchers involved in active patient care, especially in the neuroscientific environment of German university hospitals. Our design considers the implementation of FAIR (Findable, Accessible, Interoperable, and Reusable) principles and the secure handling of sensitive data in compliance with the General Data Protection Regulation.
METHODS
We introduce a streamlined database concept, featuring an intuitive graphical interface built on Hypertext Markup Language revision 5 (HTML5)/Cascading Style Sheets (CSS) technology. The system can be effortlessly deployed within local networks, that is, in Microsoft Windows 10 environments. Our design incorporates FAIR principles for effective data management. Moreover, we have streamlined data interchange through established standards like HL7 Clinical Document Architecture (CDA). To ensure data integrity, we have integrated real-time validation mechanisms that cover data type, plausibility, and Clinical Quality Language logic during data import and entry.
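The real-time validation described here — checking data type and plausibility as records are imported or entered — can be sketched as a small rule table. The field names and plausible ranges below are invented for illustration and are not taken from the system itself.

```python
# Hedged sketch of per-field validation at data entry/import time:
# each field gets a type check and a plausibility check before a record
# is accepted. Field names and ranges are hypothetical examples.
from datetime import date

RULES = {
    "age":   (int,  lambda v: 0 <= v <= 120),
    "mmse":  (int,  lambda v: 0 <= v <= 30),     # Mini-Mental State Exam score
    "visit": (date, lambda v: v <= date.today()),  # no future visit dates
}

def validate(record: dict) -> list:
    """Return human-readable errors; an empty list means the record is valid."""
    errors = []
    for field, (typ, plausible) in RULES.items():
        if field not in record:
            errors.append(f"{field}: missing")
        elif not isinstance(record[field], typ):
            errors.append(f"{field}: expected {typ.__name__}")
        elif not plausible(record[field]):
            errors.append(f"{field}: implausible value {record[field]!r}")
    return errors

print(validate({"age": 73, "mmse": 25, "visit": date(2022, 5, 4)}))   # []
print(validate({"age": 250, "mmse": "x", "visit": date(2022, 5, 4)}))
```

Running checks at entry time, rather than during later cleaning, is what keeps a unified data matrix export-ready; the same rule table could also drive import-time validation of external files.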
RESULTS
We developed and evaluated our concept with clinicians using a sample dataset of subjects who visited our memory clinic over a 3-year period, for whom several multimodal clinical parameters were collected. A notable advantage is the unified data matrix, which simplifies data aggregation, anonymization, and export. This streamlines data exchange and enhances database integration with platforms like the Konstanz Information Miner (KNIME).
CONCLUSION
Our approach offers a significant advancement for capturing and managing clinical research data, specifically tailored to small-scale initiatives operating within limited information technology (IT) infrastructures. It is designed for immediate, hassle-free deployment by clinicians and researchers. The database template and precompiled versions of the user interface are available at: https://github.com/stebro01/research_database_sqlite_i2b2.git.
Topics: Humans; Data Management; Programming Languages
PubMed: 38301729
DOI: 10.1055/a-2259-0008