International Cybersecurity Law Review 2022
Touchless technology, often called Zero User Interface (Zero UI), has begun to permeate every aspect of our lives as its use became necessary for hygiene measures in public places. Touchless interaction began replacing touchscreens as a luxury concept meant to give digital interactions a fancier look, but it has since gained real value as a health-oriented interaction method: switching to a touchless interface reduces common touchpoints, which helps safeguard against the spread of pathogens. Although touchless technology is not new, its use increased massively during the COVID-19 pandemic because of its inherently hygienic nature. However, this new form of digital interaction raises several privacy and security issues that need attention if human-machine interaction is to remain safe against security breaches and cyber-attacks that target users' credentials. This paper outlines the potential security and privacy issues concerning Zero UI adoption across various technologies, issues that must be considered by anyone wishing to adopt responsible practices with this technology.
PubMed: 37521506
DOI: 10.1365/s43439-022-00052-z -
JMIR mHealth and uHealth Mar 2021
Review
BACKGROUND
The use of wearables facilitates data collection at a previously unobtainable scale, enabling the construction of complex predictive models with the potential to improve health. However, the highly personal nature of these data requires strong privacy protection against data breaches and the use of data in a way that users do not intend. One method to protect user privacy while taking advantage of sharing data across users is federated learning, a technique that allows a machine learning model to be trained using data from all users while only storing a user's data on that user's device. By keeping data on users' devices, federated learning protects users' private data from data leaks and breaches on the researcher's central server and provides users with more control over how and when their data are used. However, there are few rigorous studies on the effectiveness of federated learning in the mobile health (mHealth) domain.
OBJECTIVE
We review federated learning and assess whether it can be useful in the mHealth field, especially for addressing common mHealth challenges such as privacy concerns and user heterogeneity. The aims of this study are to describe federated learning in an mHealth context, apply a simulation of federated learning to an mHealth data set, and compare the performance of federated learning with the performance of other predictive models.
METHODS
We applied a simulation of federated learning to predict the affective state of 15 subjects using physiological and motion data collected from a chest-worn device for approximately 36 minutes. We compared the results from this federated model with those from a centralized or server model and with the results from training individual models for each subject.
RESULTS
In a 3-class classification problem using physiological and motion data to predict whether the subject was undertaking a neutral, amusing, or stressful task, the federated model achieved 92.8% accuracy on average, the server model achieved 93.2% accuracy on average, and the individual model achieved 90.2% accuracy on average.
CONCLUSIONS
Our findings support the potential for using federated learning in mHealth. The results showed that the federated model performed better than a model trained separately on each individual and nearly as well as the server model. As federated learning offers more privacy than a server model, it may be a valuable option for designing sensitive data collection methods.
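Federated averaging, the canonical algorithm behind federated setups like the one simulated above, can be sketched in a few lines: each client trains on data that never leaves its device, and the server only averages the resulting weights. This is a generic illustration, not the authors' implementation — the least-squares model, learning rate, client sizes, and round count are all invented for the demo.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on a
    least-squares objective. The data X, y never leave this function."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(clients, rounds=20, dim=3):
    """Server loop: broadcast the global model, collect locally trained
    weights, and average them weighted by client dataset size."""
    global_w = np.zeros(dim)
    total = sum(len(y) for _, y in clients)
    for _ in range(rounds):
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = sum(len(y) / total * w
                       for (_, y), w in zip(clients, updates))
    return global_w

# Synthetic example: three "devices" sharing the same underlying relation.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for n in (40, 60, 50):
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.01, size=n)
    clients.append((X, y))

w = federated_averaging(clients)  # close to true_w, without pooling data
```

The server only ever sees weight vectors, never raw records — which is exactly the property that still leaves room for the inference attacks discussed in later entries of this list.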
Topics: Computer Simulation; Humans; Machine Learning; Privacy; Research Design; Telemedicine
PubMed: 33783362
DOI: 10.2196/23728 -
International Journal of Bipolar... Dec 2017
Review
While e-health initiatives are poised to revolutionize delivery of and access to mental health care, conducting clinical research online involves specific contextual and ethical considerations. Face-to-face psychosocial interventions can at times entail risk and have adverse effects, and the same is true of online mental health programs. Risks associated with and specific to internet psychosocial interventions include potential breaches of confidentiality in online communications (such as unencrypted email), data privacy and security risks, risks of self-selection and self-diagnosis, and the shortcomings of receiving psychoeducation and treatment at a distance from an impersonal website. Such ethical issues need to be recognized and proactively managed in website and study design as well as in treatment implementation. For online interventions to succeed, the risks and expectations of all involved need to be carefully considered, with a focus on ethical integrity.
PubMed: 28480484
DOI: 10.1186/s40345-017-0095-3 -
JMIR Human Factors Jul 2023
Adult Patients' Experiences of Using a Patient Portal With a Focus on Perceived Benefits and Difficulties, and Perceptions on Privacy and Security: Qualitative Descriptive Study.
BACKGROUND
Patient portals can facilitate patient engagement in care management. Driven by national efforts over the past decade, patient portals are being implemented by hospitals and clinics nationwide. Continuous evaluation of patient portals and reflection of feedback from end users across care settings are needed to make patient portals more user-centered after the implementation.
OBJECTIVE
The aim of this study was to investigate the lived experience of using a patient portal in adult patients recruited from a variety of care settings, focusing on their perceived benefits and difficulties of using the patient portal, and trust and concerns about privacy and security.
METHODS
This qualitative descriptive study was part of a cross-sectional digital survey research to examine the comprehensive experience of using a patient portal in adult patients recruited from 20 care settings from hospitals and clinics of a large integrated health care system in the mid-Atlantic area of the United States. Those who had used a patient portal offered by the health care system in the past 12 months were eligible to participate in the survey. Data collected from 734 patients were subjected to descriptive statistics and content analysis.
RESULTS
The majority of the participants were female and non-Hispanic White, with a mean age of 53.1 (SD 15.34) years. Content analysis of 1589 qualitative comments identified 22 themes across 4 topics: beneficial aspects (6 themes) and difficulties (7 themes) in using the patient portal, and trust (5 themes) and concerns (4 themes) about its privacy and security. Most of the participants perceived the patient portal functions as beneficial for communicating with health care teams and for monitoring health status and care activities. At the same time, about a quarter of them reported difficulties in using those functions, including not receiving timely responses to eMessages and difficulty finding information in the portal. A protected log-in process and trust in health care providers were the most frequently mentioned reasons for trusting the privacy and security of the patient portal; the most frequently mentioned concern was the risk of data breaches such as hacking attacks and identity theft.
CONCLUSIONS
This study provides an empirical understanding of the lived experience of using a patient portal in adult patient users across care settings with a focus on the beneficial aspects and difficulties in using the patient portal, and trust and concerns about privacy and security. Our study findings can serve as a valuable reference for health care institutions and software companies to implement more user-centered, secure, and private patient portals. Future studies may consider targeting other patient portal programs and patients with infrequent or nonuse of patient portals.
PubMed: 37490316
DOI: 10.2196/46044 -
Journal of Religion and Health Aug 2022
The Sharī'ah affords considerable concern for human emotions, with its rulings seeking to remove the deliberate and accidental types of harm that may be inflicted on individuals or society. The principle of medical confidentiality protects patients' dignity and averts the harm that disclosure could otherwise cause. Texts from the Quran and Sunnah substantiate that unjustified disclosure of secrets is prohibited and that whoever breaches confidentiality is to be punished. This paper explores the origins of the Islamic ethical framework for dealing with privacy, particularly confidential information acquired by health professionals. To that end, it examines various āyāt (Quranic verses) and aḥādīth (Prophetic traditions) related to privacy, and analogically deduces various aspects of confidentiality in the context of medical ethics. On that basis, it discusses key principles of medical confidentiality from an Islamic juristic perspective, including its types and conditions.
Topics: Confidentiality; Disclosure; Ethics, Medical; Humans; Islam; Privacy
PubMed: 34181205
DOI: 10.1007/s10943-021-01313-7 -
Yearbook of Medical Informatics Aug 2020
OBJECTIVES
To provide an introduction to the 2020 International Medical Informatics Association (IMIA) Yearbook by the editors.
METHODS
This editorial provides an introduction and overview to the 2020 IMIA Yearbook, whose special topic is "Ethics in Health Informatics". The keynote paper, the survey paper of the Special Topic section, and the paper on Donald Lindberg's ethical scientific openness in the History of Medical Informatics chapter of the Yearbook are discussed. Changes in the Yearbook Editorial Committee are also described.
RESULTS
Inspired by medical ethics, ethics in health informatics progresses with the advances in biomedical informatics. With the wide use of EHRs, the enlargement of the care team perimeter, the need for data sharing for care continuity, the reuse of data for research, and the implementation of AI-powered decision support tools, new ethics requirements are needed to address issues such as threats to privacy, confidentiality breaches, poor security practices, lack of patient information, tensions over data sharing and reuse policies, the need for more transparency about apps' effectiveness, biased algorithms with discriminatory outcomes, guarantees of trustworthy AI, and concerns about the re-identification of de-identified data.
CONCLUSIONS
Despite privacy rules rooted in the Health Insurance Portability and Accountability Act of 1996 (HIPAA) in the USA and even more restrictive regulations such as the EU General Data Protection Regulation, applicable since May 2018, some people do not believe their data will be kept confidential and may withhold sensitive information from a provider, which can itself lead to unethical situations. Transparency about healthcare data processes is a condition of healthcare professionals' and patients' trust and of their adoption of digital tools.
Topics: Artificial Intelligence; Attitude of Health Personnel; Attitude to Health; Bioethical Issues; Health Personnel; Humans; Medical Informatics; Trust
PubMed: 32823296
DOI: 10.1055/s-0040-1702029 -
Medical Image Analysis Feb 2024
Artificial intelligence (AI) has a multitude of applications in cancer research and oncology. However, the training of AI systems is impeded by the limited availability of large datasets due to data protection requirements and other regulatory obstacles. Federated and swarm learning represent possible solutions to this problem by collaboratively training AI models while avoiding data transfer. However, in these decentralized methods, weight updates are still transferred to the aggregation server for merging the models. This leaves the possibility for a breach of data privacy, for example by model inversion or membership inference attacks by untrusted servers. Somewhat-homomorphically-encrypted federated learning (SHEFL) is a solution to this problem because only encrypted weights are transferred, and model updates are performed in the encrypted space. Here, we demonstrate the first successful implementation of SHEFL in a range of clinically relevant tasks in cancer image analysis on multicentric datasets in radiology and histopathology. We show that SHEFL enables the training of AI models which outperform locally trained models and perform on par with models which are centrally trained. In the future, SHEFL can enable multiple institutions to co-train AI models without forsaking data governance and without ever transmitting any decryptable data to untrusted servers.
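The additively homomorphic property that SHEFL relies on — a server can aggregate encrypted updates without being able to decrypt any individual one — can be illustrated with a toy Paillier cryptosystem. This is a from-scratch sketch with small hard-coded primes for illustration only; it is not the scheme, the parameters, or the library used in the paper, and it is in no way secure.

```python
import math
import random

# Toy Paillier setup with small demo primes (INSECURE, illustration only).
p, q = 1_000_003, 1_000_033
n = p * q
n2 = n * n
g = n + 1
phi = (p - 1) * (q - 1)                 # valid decryption exponent here
mu = pow((pow(g, phi, n2) - 1) // n, -1, n)

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2, with random r coprime to n."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^phi mod n^2) * mu mod n, where L(x) = (x-1)/n."""
    return ((pow(c, phi, n2) - 1) // n) * mu % n

def add_encrypted(c1, c2):
    """Homomorphic addition: Dec(c1 * c2 mod n^2) = m1 + m2 mod n."""
    return (c1 * c2) % n2

# Two clients encode a weight update in fixed point and encrypt it;
# the server combines ciphertexts and learns only the aggregate.
scale = 1000
u1, u2 = int(0.123 * scale), int(0.456 * scale)
agg = add_encrypted(encrypt(u1), encrypt(u2))
mean_update = decrypt(agg) / (2 * scale)
```

The design point this demonstrates is the one the abstract makes: the aggregation server works entirely in ciphertext space, so a compromised or untrusted server never holds a decryptable individual update to run model-inversion or membership-inference attacks against.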
Topics: Humans; Artificial Intelligence; Learning; Neoplasms; Image Processing, Computer-Assisted; Radiology
PubMed: 38104402
DOI: 10.1016/j.media.2023.103059 -
Sensors (Basel, Switzerland) Jul 2022
Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε∈{1,3,6,10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε=6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
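The local defense described above — adding calibrated Gaussian noise to client model updates — follows the usual DP-SGD-style recipe: bound each update's L2 norm by clipping, then add noise proportional to that bound. The sketch below is a generic illustration; the clipping bound and noise multiplier are invented values, not ones calibrated to the paper's ε budgets, and the Rényi-DP accounting step is omitted.

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1,
                     rng=np.random.default_rng(0)):
    """Gaussian mechanism on a client update: clip its L2 norm to
    clip_norm (bounding any one client's influence), then add Gaussian
    noise scaled to that bound. This limits what image-reconstruction
    attacks on the shared update can recover."""
    norm = np.linalg.norm(update)
    clipped = update / max(1.0, norm / clip_norm)
    noise = rng.normal(0.0, noise_multiplier * clip_norm,
                       size=update.shape)
    return clipped + noise

raw = np.array([3.0, 4.0])        # L2 norm 5 -> scaled down to norm 1
private = privatize_update(raw)   # what the client would actually send
```

With `noise_multiplier=0.0` the function reduces to pure clipping, which is a convenient way to sanity-check the norm bound; in a real deployment the multiplier and the number of training rounds would be fed to a Rényi-DP accountant to obtain the (ε, δ) guarantee.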
Topics: Humans; Neural Networks, Computer; Privacy; ROC Curve; Radiography; X-Rays
PubMed: 35890875
DOI: 10.3390/s22145195 -
Journal of Medical Internet Research Feb 2021
BACKGROUND
Health care professionals are caught between the wish of patients to speed up health-related communication via emails and the need for protecting health information.
OBJECTIVE
We aimed to analyze the demographic characteristics of patients providing an email, and study the distribution of emails' domain names.
METHODS
We used the information system of the European Hospital Georges Pompidou (HEGP) to identify patients who provided an email address. We used a 1:1 matching strategy to study the demographic characteristics of the patients associated with the presence of an email, and described the characteristics of the emails used (in terms of types of emails-free, business, and personal).
RESULTS
Overall, 4.22% (41,004/971,822) of the total population of patients provided an email address. The year of last contact with the patient was the strongest driver of the presence of an email address (odds ratio [OR] 20.8, 95% CI 18.9-22.9). Patients who provided an email address were more likely to be treated for chronic conditions and more likely to have been born between 1950 and 1969 (OR 1.60, 95% CI 1.54-1.67, taking patients born before 1950 as reference; OR 0.56, 95% CI 0.53-0.59 compared to those born after 1990). Of the 41,004 email addresses collected, 37,779 were associated with known email providers: 31,005 with Google, Microsoft, Orange, and Yahoo!, 2878 with business email addresses, and 347 with personalized domain names.
CONCLUSIONS
Emails have been collected only recently in our institution, and the importance of the year of last contact probably reflects this recent change in contact-information collection policy. The demographic characteristics, and especially the age distribution, likely result from a population bias in the hospital: patients providing an email address are more likely to be treated for chronic diseases. A risk analysis of email use revealed several situations that could constitute a privacy breach that is both likely and of major consequence. Patients treated for chronic diseases are more likely to provide an email address and are also more at risk in the event of a privacy breach; several common situations could expose their private information. We therefore recommend a very restrictive use of email for health communication.
Topics: Case-Control Studies; Computer Security; Electronic Mail; Epidemiologic Studies; Female; France; Hospitals, University; Humans; Male
PubMed: 33625375
DOI: 10.2196/13992 -
Frontiers in Genetics 2020
BACKGROUND
The existing literature has not examined how Chinese direct-to-consumer (DTC) genetic testing providers navigate the issues of informed consent, privacy, and data protection associated with testing services. This research aims to explore these questions by examining the relevant documents and messages published on websites of the Chinese DTC genetic test providers.
METHODS
Using Baidu.com, the most popular Chinese search engine, we compiled the websites of providers who offer genetic testing services and analyzed the available documents related to informed consent, the terms of service, and the privacy policy. The analysis of each DTC provider was guided by the following inquiries: the methods available for purchasing testing products; the methods providers used to obtain informed consent; privacy issues and measures for protecting consumers' health information; the policy for third-party data sharing; consumers' rights to their data; and the liabilities in the event of a data breach.
RESULTS
We found that 68.7% of providers offer multiple channels for purchasing genetic testing products and that social media has become a popular platform to promote testing services. Informed consent forms are not available on 94% of providers' websites, and a privacy policy is offered by only 45.8% of DTC genetic testing providers. Thirty-nine providers stated that they used measures to protect consumers' information, of which 29 distinguished consumers' general personal information from their genetic information. In 33.7% of the cases examined, providers stated that, with consumers' explicit permission, they could reuse and share clients' information for non-commercial purposes. Twenty-three providers granted consumers rights over their health information, the most frequently mentioned being the right to decide how their data can be used by providers. Lastly, 21.7% of providers clearly stated their liabilities in the event of a data breach, placing more emphasis on the providers' exemption from liability.
CONCLUSIONS
Currently, the Chinese DTC genetic testing business is running in a regulatory vacuum, governed by self-regulation. The government should develop a comprehensive legal framework to regulate DTC genetic testing offerings. Regulatory improvements should be made based on periodical reviews of the supervisory strategy to meet the rapid development of the DTC genetic testing industry.
PubMed: 32425986
DOI: 10.3389/fgene.2020.00416