The Journal of the Royal College of... Sep 2023
Topics: Humans; Biomedical Research; Privacy; Information Dissemination
PubMed: 37293886
DOI: 10.1177/14782715231175001
Value in Health: the Journal of the... Sep 2023
Topics: Humans; Privacy; Confidentiality
PubMed: 37406963
DOI: 10.1016/j.jval.2023.06.013
Journal of Medical Internet Research May 2023
Review
BACKGROUND
The aging society poses new socioeconomic challenges, to which active and assisted living (AAL) technologies are a potential solution. Visual-based sensing systems are technologically among the most advantageous forms of AAL technologies for providing health and social care; however, they come at the risk of violating rights to privacy. As video-based technologies proliferate, privacy-preserving smart solutions are being developed; however, research on user acceptance of these developments has not yet been systematized.
OBJECTIVE
With this scoping review, we aimed to gain an overview of existing studies examining the viewpoints of older adults and/or their caregivers on technology acceptance and privacy perceptions, specifically toward video-based AAL technology.
METHODS
A total of 22 studies were identified with a primary focus on user acceptance and privacy attitudes during a literature search of major databases. Methodological quality assessment and thematic analysis of the selected studies were executed and principal findings are summarized. The PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines were followed at every step of this scoping review.
RESULTS
Acceptance attitudes toward video-based AAL technologies are largely conditional and are summarized into five main themes seen from the two end-user perspectives: caregiver and care receiver. While privacy was a major barrier to video-based AAL technologies, security and medical safety were identified as the major benefits across the studies.
CONCLUSIONS
This review reveals a very low methodological quality among the empirical studies assessing user acceptance of video-based AAL technologies. We propose that more specific research, targeting end users and real-life settings, is needed to assess the acceptance of proposed solutions.
Topics: Aged; Humans; Aging; Attitude; Privacy; Technology
PubMed: 37126390
DOI: 10.2196/45297
Journal of the American Medical... Sep 2021
Review
OBJECTIVE
Differential privacy is a relatively new method for data privacy that has seen growing use due to its strong protections, which rely on added noise. This study assesses the extent of its awareness, development, and usage in health research.
MATERIALS AND METHODS
A scoping review was conducted by searching for ["differential privacy" AND "health"] in major health science databases, with additional articles obtained via expert consultation. Relevant articles were classified according to subject area and focus.
RESULTS
A total of 54 articles met the inclusion criteria. Nine articles provided descriptive overviews, 31 focused on algorithm development, 9 presented novel data sharing systems, and 8 discussed appraisals of the privacy-utility tradeoff. The most common areas of health research where differential privacy has been discussed are genomics, neuroimaging studies, and health surveillance with personal devices. Algorithms were most commonly developed for the purposes of data release and predictive modeling. Studies on privacy-utility appraisals have considered economic cost-benefit analysis, low-utility situations, personal attitudes toward sharing health data, and mathematical interpretations of privacy risk.
DISCUSSION
Differential privacy remains at an early stage of development for applications in health research, and accounts of real-world implementations are scant. There are few algorithms for explanatory modeling and statistical inference, particularly with correlated data. Furthermore, diminished accuracy in small datasets is problematic. Some encouraging work has been done on decision making with regard to epsilon. The dissemination of future case studies can inform successful appraisals of privacy and utility.
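To make the added-noise idea concrete, here is a minimal sketch of the Laplace mechanism, the canonical differentially private release; the function name and parameters are our own illustration, not drawn from any of the reviewed algorithms:

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value plus Laplace noise with scale sensitivity / epsilon.

    Smaller epsilon means more noise: stronger privacy, lower utility."""
    scale = sensitivity / epsilon
    # A Laplace(0, scale) draw is the difference of two exponential draws.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# Example: privatize a patient count (a counting query has sensitivity 1).
noisy_count = laplace_mechanism(true_value=120, sensitivity=1.0, epsilon=0.5)
```

The decision-making work on epsilon mentioned above concerns exactly this trade-off: choosing epsilon small enough to protect individuals while keeping the released statistic usable, which is what makes small datasets problematic.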
CONCLUSIONS
More development, case studies, and evaluations are needed before differential privacy can see widespread use in health research.
Topics: Algorithms; Confidentiality; Databases, Factual; Genomics; Privacy
PubMed: 34333623
DOI: 10.1093/jamia/ocab135
Current Opinion in Ophthalmology May 2022
Review
PURPOSE OF REVIEW
The application of artificial intelligence (AI) in medicine and ophthalmology has experienced exponential breakthroughs in recent years in diagnosis, prognosis, and aiding clinical decision-making. The use of digital data has also heralded the need for privacy-preserving technology to protect patient confidentiality and to guard against threats such as adversarial attacks. Hence, this review aims to outline novel AI-based systems for ophthalmology use, privacy-preserving measures, potential challenges, and future directions of each.
RECENT FINDINGS
Several key AI algorithms used to improve disease detection and outcomes include data-driven, image-driven, natural language processing (NLP)-driven, genomics-driven, and multimodality algorithms. However, deep learning systems are susceptible to adversarial attacks, and the use of data for training models raises privacy concerns. Several data protection methods address these concerns, in the form of blockchain technology, federated learning, and generative adversarial networks.
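To illustrate the adversarial-attack risk flagged above, here is a minimal fast gradient sign method (FGSM) sketch against a plain logistic model; the model, names, and numbers are our own toy illustration, not a system from the reviewed literature:

```python
import math

def fgsm_perturb(x, y, w, eps):
    """Perturb input x by eps per feature in the direction that increases
    the logistic loss for true label y in {-1, +1} under weights w."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    # Gradient of log(1 + exp(-y * w.x)) with respect to x is coeff * w.
    coeff = -y / (1.0 + math.exp(margin))
    grad = [coeff * wi for wi in w]
    return [xi + eps * (1 if g > 0 else -1 if g < 0 else 0)
            for xi, g in zip(x, grad)]

# A correctly classified input (score w.x = 2.0 > 0) loses confidence
# after a small FGSM perturbation of each feature.
w = [2.0, -1.0]
x_adv = fgsm_perturb([1.0, 0.0], 1, w, eps=0.5)
```

Defenses such as adversarial training work by folding perturbed examples like `x_adv` back into the training set.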
SUMMARY
AI applications have vast potential to meet many eyecare needs, consequently reducing the burden on scarce healthcare resources. A pertinent challenge is to maintain data privacy and confidentiality while supporting AI endeavors, where data protection methods will need to evolve rapidly alongside AI technology. Ultimately, for AI to succeed in medicine and ophthalmology, a balance must be found between innovation and privacy.
Topics: Artificial Intelligence; Humans; Natural Language Processing; Ophthalmology; Privacy; Technology
PubMed: 35266894
DOI: 10.1097/ICU.0000000000000846
Sensors (Basel, Switzerland) Oct 2022
Review
Intelligent transportation systems will play a key role in the smart cities of the future. In particular, railway transportation is gaining attention as a promising solution to cope with the mobility challenges in large urban areas. Thanks to the miniaturisation of sensors and the deployment of fast data networks, the railway industry is being augmented with contextual, real-time information that opens the door to novel and personalised services. Despite the benefits of this digitalisation, the high complexity of railway transportation entails a number of challenges, particularly from security and privacy perspectives. Since railway assets are attractive targets for terrorism, coping with strong security and privacy requirements such as cryptography and privacy-preserving methods is of utmost importance. This article provides a thorough systematic literature review on information security and privacy within railway transportation systems, following the well-known methodology proposed by vom Brocke et al. We sketch out the most relevant studies and outline the main focuses, challenges and solutions described in the literature, considering technical, societal, regulatory and ethical approaches. Additionally, we discuss the remaining open issues and suggest several research lines that will gain relevance in the years to come.
Topics: Privacy; Computer Security; Transportation
PubMed: 36298049
DOI: 10.3390/s22207698
International Journal of Environmental... Sep 2022
Review
This article offers a brief overview of 'privacy-by-design (or data-protection-by-design) research environments', namely Trusted Research Environments (TREs, most commonly used in the United Kingdom) and Personal Health Trains (PHTs, most commonly used in mainland Europe). These secure environments are designed to enable the safe analysis of multiple, linked (and often big) data sources, including sensitive personal data and data owned by, and distributed across, different institutions. They take data protection and privacy requirements into account from the very start (conception phase, during system design) rather than as an afterthought or 'patch' implemented at a later stage on top of an existing environment. TREs and PHTs are becoming increasingly important for conducting large-scale privacy-preserving health research and for enabling federated learning and discoveries from big healthcare datasets. The paper also presents select examples of successful TRE and PHT implementations and of large-scale studies that used them.
Topics: Computer Security; Delivery of Health Care; Europe; Information Storage and Retrieval; Privacy
PubMed: 36231175
DOI: 10.3390/ijerph191911876
Yearbook of Medical Informatics Aug 2023
OBJECTIVES
Machine learning (ML) is a powerful asset to support physicians in decision-making procedures, providing timely answers. However, ML for health systems can suffer from security attacks and privacy violations. This paper investigates studies of security and privacy in ML for health.
METHODS
We examine attacks, defenses, and privacy-preserving strategies, discussing their challenges. Our research protocol was as follows: starting with a manual search, defining the search string, removing duplicate papers, filtering papers by title and abstract and then by full text, and analyzing their contributions, including strategies and challenges. In total, we collected and discussed 40 papers on attacks, defense, and privacy.
RESULTS
Our findings identified the most employed strategies for each domain. We found trends in attacks, including universal adversarial perturbations (UAPs), generative adversarial network (GAN)-based attacks, and DeepFakes used to generate malicious examples. Trends in defense are adversarial training, GAN-based strategies, and out-of-distribution (OOD) detection to identify and mitigate adversarial examples (AEs). We found privacy-preserving strategies such as federated learning (FL), differential privacy, and combinations of strategies that enhance FL. Open challenges include attacks that bypass fine-tuning, defenses that calibrate models to improve their robustness, and privacy methods that further strengthen the FL strategy.
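The federated learning strategy recurring in these findings can be sketched in its simplest form, federated averaging, in which clients fit a model on their private data and share only parameters, never raw records. This is a toy illustration with a one-feature linear model; all names and data are our own, not from the reviewed papers:

```python
def local_update(weights, data, lr=0.1):
    """One epoch of gradient descent on a client's private data
    (least squares on a one-feature linear model: y ~ w0 + w1 * x)."""
    w0, w1 = weights
    for x, y in data:
        err = (w0 + w1 * x) - y
        w0 -= lr * err
        w1 -= lr * err * x
    return [w0, w1]

def federated_average(global_weights, client_datasets, rounds=20):
    """FedAvg: each round, every client trains locally; the server
    averages the returned weights, weighted by client dataset size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for data in client_datasets:
            updates.append(local_update(list(global_weights), data))
            sizes.append(len(data))
        total = sum(sizes)
        global_weights = [
            sum(u[i] * n for u, n in zip(updates, sizes)) / total
            for i, _ in enumerate(global_weights)
        ]
    return global_weights

# Two hospitals hold disjoint samples from y = 2x + 1; only model
# weights cross the institutional boundary.
clients = [[(0.0, 1.0), (1.0, 3.0)], [(2.0, 5.0), (3.0, 7.0)]]
w = federated_average([0.0, 0.0], clients, rounds=200)
```

The differential-privacy combinations noted above would add calibrated noise to each client's returned weights before averaging.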
CONCLUSIONS
In conclusion, it is critical to explore security and privacy in ML for health, because risks are growing and vulnerabilities remain open. Our study presents strategies and challenges to guide research investigating security and privacy issues in ML applied to health systems.
Topics: Humans; Privacy; Machine Learning; Physicians
PubMed: 38147869
DOI: 10.1055/s-0043-1768731
Sensors (Basel, Switzerland) Aug 2021
Addressing cyber and privacy risks has never been more critical for organisations. While a number of risk assessment methodologies and software tools are available, it is most often the case that one must, at least, integrate them into a holistic approach that combines several appropriate risk sources as input to risk mitigation tools. In addition, cyber risk assessment primarily investigates cyber risks as the consequence of vulnerabilities and threats that threaten assets of the investigated infrastructure. In fact, cyber risk assessment is decoupled from privacy impact assessment, which aims to detect privacy-specific threats and assess the degree of compliance with data protection legislation. Furthermore, a Privacy Impact Assessment (PIA) is conducted in a proactive manner during the design phase of a system, combining processing activities and their inter-dependencies with assets, vulnerabilities, real-time threats and Personally Identifiable Information (PII) that may occur during the dynamic life-cycle of systems. In this paper, we propose a cyber and privacy risk management toolkit, called AMBIENT (Automated Cyber and Privacy Risk Management Toolkit) that addresses the above challenges by implementing and integrating three distinct software tools. AMBIENT not only assesses cyber and privacy risks in a thorough and automated manner but it also offers decision-support capabilities, to recommend optimal safeguards using the well-known repository of the Center for Internet Security (CIS) Controls. To the best of our knowledge, AMBIENT is the first toolkit in the academic literature that brings together the aforementioned capabilities. To demonstrate its use, we have created a case scenario based on information about cyber attacks we have received from a healthcare organisation, as a reference sector that faces critical cyber and privacy threats.
Topics: Computer Security; Privacy; Risk Assessment; Risk Management
PubMed: 34450935
DOI: 10.3390/s21165493
Human Molecular Genetics Oct 2021
Review
Debates surrounding genetic privacy have taken on different forms over the past 30 years. Taking genetic privacy to mean an interest that individuals, families, or even communities have with respect to genetic information, we examine the metaphors used in these debates to chronicle the development of genetic privacy. For 1990-2000, we examine claims of ownership and of 'humanity' spurred by the launch of the Human Genome Project and related endeavors. For 2000-2010, we analyze the interface of law and ethics with research infrastructures such as biobanks, for which notions of citizenship and 'public goods' were central. For 2010-2020, we detail the relational turn of genetic privacy in response to large international research consortia and big data. Although each decade had its leading conceptions of genetic privacy, the subject is neither strictly chronological nor static. We conclude with reflections on the nature of genetic privacy and the necessity of bringing together the unique and private genetic self with the human other.
Topics: Ethics, Clinical; Genetic Privacy; Human Genome Project; Humanities; Humans; Ownership
PubMed: 34155499
DOI: 10.1093/hmg/ddab164