Indian Journal of Medical Ethics, 2023
Review
BACKGROUND
Mobile phone-based interventions are being increasingly used in community health work in India. The extensive use of mobile phones in community health work is associated with several ethical issues. This review was conducted to identify the ethical issues related to mHealth applications in community health work in India.
METHODS
We performed a scoping review of the literature in PubMed and Google Scholar using a search strategy that we developed. We included studies that mentioned ethical issues in mHealth applications involving community health work and community health workers in India, published in peer-reviewed English-language journals between 2011 and 2021. All three authors screened the articles, shortlisted them, read them, and extracted the data. We then synthesised the data into a conceptual framework.
RESULTS
Our search yielded 1,125 papers, from which we shortlisted 121 after screening and, after full-text reading, included 58 in the final scoping review. The main benefits of mHealth applications identified in these papers included improved quality of care, increased awareness about health and illness, greater accountability of the health system, accurate data capture, and timely data-driven decision making. The risks identified were impersonal communication by community health workers, increased workload, potential breaches of privacy and confidentiality, and stigmatisation. Inherent inequities in access to mobile phones due to gender and class excluded women and the poor from the benefits of mHealth interventions. Although mHealth interventions increased access to healthcare by extending tele-health to remote areas, they are likely to remain inequitable unless contextualised to local rural settings through community engagement.
CONCLUSION
This scoping review revealed a lack of well-conducted empirical studies exploring the ethical issues related to mHealth applications in community health work.
Topics: Humans; Female; Public Health; Delivery of Health Care; Cell Phone; Telemedicine; India; Mobile Applications
PubMed: 37310008
DOI: 10.20529/IJME.2023.037 -
Humanities & Social Sciences..., 2023
Personal physiological data is the digital representation of physical features that identify individuals in the Internet of Everything environment. Such data is characterised by uniqueness, identifiability, replicability, irreversibility of damage, and informational relevance, and it can be collected, shared, and used in a wide range of applications. As facial recognition technology has become prevalent and smarter over time, facial data associated with critical personal information poses a potential security and privacy risk of being leaked on Internet of Everything application platforms. However, current research has not identified a systematic and effective method for identifying these risks. Thus, in this study, we adopted the fault tree analysis method to identify risks. Based on the risks identified, we then listed intermediate events and basic events according to their causal logic, and drew a complete fault tree diagram of facial data breaches. The study determined that personal factors, data management, and absence of supervision are the three intermediate events. Furthermore, the lack of laws and regulations and the immaturity of facial recognition technology are the two major basic events leading to facial data breaches. We anticipate that this study will explain the manageability and traceability of personal physiological data during its lifecycle. In addition, this study contributes to an understanding of the risks physiological data faces, informing individuals how to manage their data carefully and guiding management parties in formulating robust policies and regulations that can ensure data security.
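The fault-tree logic described above can be sketched as a small computation. The intermediate-event names follow the paper's structure, but the basic-event probabilities and the gate choices below are invented purely for illustration:

```python
# Minimal fault-tree evaluation: an OR gate fires if any independent input
# event occurs; an AND gate fires only if all inputs occur.

def p_or(*ps):
    # P(at least one of several independent events)
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def p_and(*ps):
    # P(all of several independent events)
    out = 1.0
    for p in ps:
        out *= p
    return out

# Hypothetical basic-event probabilities (made up for illustration)
careless_sharing = 0.05           # a personal factor
immature_recognition_tech = 0.20  # a data-management factor
lack_of_regulation = 0.10         # a supervision-absence factor

# Intermediate events per the paper: personal factors, data management,
# absence of supervision (each modelled here as a one-input OR gate)
personal_factors = p_or(careless_sharing)
data_management = p_or(immature_recognition_tech)
supervision_absence = p_or(lack_of_regulation)

# Top event: a facial data breach occurs if any intermediate event occurs
facial_data_breach = p_or(personal_factors, data_management, supervision_absence)
print(round(facial_data_breach, 4))  # → 0.316
```

The value printed is only a consequence of the invented inputs; the paper's contribution is the qualitative tree structure, not numerical estimates.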
PubMed: 37192941
DOI: 10.1057/s41599-023-01673-3 -
Surgical Endoscopy, Aug 2023
BACKGROUND
Laparoscopic videos are increasingly being used for surgical artificial intelligence (AI) and big data analysis. The purpose of this study was to ensure data privacy in video recordings of laparoscopic surgery by censoring extraabdominal parts. An inside-outside-discrimination algorithm (IODA) was developed to ensure privacy protection while maximizing the remaining video data.
METHODS
IODA's neural network architecture was based on a pretrained AlexNet augmented with a long short-term memory (LSTM) network. The data set for algorithm training and testing contained a total of 100 laparoscopic surgery videos of 23 different operations with a total video length of 207 h (124 min ± 100 min per video), resulting in 18,507,217 frames (185,965 ± 149,718 frames per video). Each video frame was tagged as abdominal cavity, trocar, operation site, outside for cleaning, or translucent trocar. For algorithm testing, stratified fivefold cross-validation was used.
RESULTS
The distribution of annotated classes was: abdominal cavity 81.39%, trocar 1.39%, outside operation site 16.07%, outside for cleaning 1.08%, and translucent trocar 0.07%. Algorithm training on binary or all five classes showed similarly excellent results for classifying outside frames, with a mean F1-score of 0.96 ± 0.01 and 0.97 ± 0.01, sensitivity of 0.97 ± 0.02 and 0.97 ± 0.01, and a false positive rate of 0.99 ± 0.01 and 0.99 ± 0.01, respectively.
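For reference, the F1-score reported above is the harmonic mean of precision and recall on the positive class. A minimal sketch with invented toy labels (1 = outside frame, 0 = inside frame; nothing here comes from the study's data):

```python
def f1_score(y_true, y_pred, positive=1):
    # Count true positives, false positives, and false negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy frame labels: 1 = outside frame, 0 = abdominal-cavity frame
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(round(f1_score(y_true, y_pred), 3))  # → 0.75
```

In the study itself, this metric is averaged over the five cross-validation folds, giving the reported mean ± standard deviation.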
CONCLUSION
IODA is able to discriminate between inside and outside with high certainty. In particular, only a few outside frames are misclassified as inside and therefore at risk of privacy breach. The anonymized videos can be used for multi-centric development of surgical AI, quality management, or educational purposes. In contrast to expensive commercial solutions, IODA is open source and can be improved by the scientific community.
Topics: Humans; Artificial Intelligence; Privacy; Laparoscopy; Algorithms; Neural Networks, Computer; Video Recording
PubMed: 37145173
DOI: 10.1007/s00464-023-10078-x -
Applied Clinical Informatics, Mar 2023
BACKGROUND
The 21st Century Cures Act information blocking final rule mandated the immediate and electronic release of health care data in 2020. There is anecdotal concern that a significant amount of information is documented in notes that would breach adolescent confidentiality if released electronically to a guardian.
OBJECTIVES
The purpose of this study was to quantify the prevalence of confidential information, based on California laws, within progress notes for adolescent patients that would be released electronically and assess differences in prevalence across patient demographics.
METHODS
This is a single-center retrospective chart review of outpatient progress notes written between January 1, 2016, and December 31, 2019, at a large suburban academic pediatric network. Notes were labeled into one of three confidential domains by five expert reviewers trained on a rubric defining confidential information for adolescents derived from California state law. Participants included a random sampling of eligible patients aged 12 to 17 years old at the time of note creation. Secondary analysis included prevalence of confidentiality across age, gender, language spoken, and patient race.
RESULTS
Of 1,200 manually reviewed notes, 255 (21.3%; 95% confidence interval: 19-24%) contained confidential information. Gender and age were similarly distributed across the cohort, which was predominantly English speaking (83.9%), with white or Caucasian patients forming the largest racial group (41.2%). Confidential information was more likely to be found in notes for females (p < 0.05) and for English-speaking patients (p < 0.05). Older patients had a higher probability of notes containing confidential information (p < 0.05).
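The reported 95% confidence interval can be reproduced with a standard normal-approximation (Wald) interval for a binomial proportion, using the study's counts of 255 confidential notes out of 1,200 reviewed:

```python
import math

def proportion_ci(successes, n, z=1.96):
    # Normal (Wald) approximation for a binomial proportion:
    # p-hat ± z * sqrt(p-hat * (1 - p-hat) / n)
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

low, high = proportion_ci(255, 1200)
print(f"{low:.0%}-{high:.0%}")  # → 19%-24%, matching the study's reported 19-24%
```

The study does not state which interval method it used; the Wald form shown here is simply the most common one and happens to reproduce the published bounds after rounding.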
CONCLUSION
This study demonstrates a significant risk of breaching adolescent confidentiality if historical progress notes are released electronically to proxies without further review or redaction. With the increased sharing of health care data, there is a need to protect the privacy of adolescents and prevent potential breaches of confidentiality.
Topics: Female; Humans; Adolescent; Child; Prevalence; Retrospective Studies; Confidentiality; Privacy; Health Facilities
PubMed: 37137339
DOI: 10.1055/s-0043-1767682 -
Heliyon, Apr 2023
Review
INTRODUCTION
Artificial intelligence (AI) applications in healthcare and medicine have increased in recent years. To enable access to personal data, Trusted Research Environments (TREs) (otherwise known as Safe Havens) provide safe and secure environments in which researchers can access sensitive personal data and develop AI models (in particular, machine learning (ML) models). However, currently few TREs support the training of ML models, in part due to a gap in the practical decision-making guidance for TREs on handling model disclosure. Specifically, the training of ML models creates a need to disclose new types of outputs from TREs. Although TREs have clear policies for the disclosure of statistical outputs, the extent to which trained models can leak personal training data once released is not well understood.
BACKGROUND
We review, for a general audience, different types of ML models and their applicability within healthcare. We explain the outputs from training an ML model and how trained ML models can be vulnerable to external attacks that aim to discover the personal data encoded within them.
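The leakage risk described here is often illustrated by membership inference: an attacker guesses whether a given record was in the training set, exploiting the fact that models tend to fit their own training records unusually well. A toy loss-threshold sketch of the idea (the loss values and threshold are invented for illustration and are not a method from this review):

```python
# Toy membership inference: records the model fits unusually well
# (low per-record loss) are guessed to be training members.
def member_guess(losses, threshold):
    return [loss < threshold for loss in losses]

# Hypothetical per-record losses: members tend to have lower loss
train_losses = [0.05, 0.10, 0.08]   # the model saw these records in training
test_losses = [0.90, 0.60, 0.75]    # the model never saw these records
guesses = member_guess(train_losses + test_losses, threshold=0.5)
print(guesses)  # → [True, True, True, False, False, False]
```

A TRE disclosure policy for trained models has to reason about exactly this kind of attack: even without the raw data, the released model's behaviour can reveal who was in it.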
RISKS
We present the challenges for disclosure control of trained ML models in the context of training and exporting models from TREs. We provide insights and analyse methods that could be introduced within TREs to mitigate the risk of privacy breaches when disclosing trained models.
DISCUSSION
Although specific guidelines and policies exist for statistical disclosure controls in TREs, they do not satisfactorily address these new types of output requests; i.e., trained ML models. There is significant potential for new interdisciplinary research opportunities in developing and adapting policies and tools for safely disclosing ML outputs from TREs.
PubMed: 37123891
DOI: 10.1016/j.heliyon.2023.e15143 -
Sensors (Basel, Switzerland), Apr 2023
Health equipment is used to keep track of significant health indicators, automate health interventions, and analyze those indicators. People have begun using mobile applications to track health characteristics and medical needs because devices are now linked to high-speed internet and mobile phones. This combination of smart devices, the internet, and mobile applications expands the use of remote health monitoring through the Internet of Medical Things (IoMT). The accessibility and unpredictability of IoMT create massive security and confidentiality threats in IoMT systems. In this paper, Octopus and Physically Unclonable Functions (PUFs) are used to protect the privacy of healthcare devices by masking the data, and machine learning (ML) techniques are used to recover the health data and reduce security breaches on networks. This technique exhibited 99.45% accuracy, which shows that it could be used to secure health data with masking.
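The masking role a PUF plays here can be caricatured in a few lines. Real PUFs derive a device-unique challenge-response function from silicon manufacturing variation; the sketch below is a pure-software stand-in (a hash over an invented device secret), and every name in it is hypothetical:

```python
import hashlib

# Software stand-in for a PUF: a device-unique secret mixed with a
# challenge yields a repeatable, device-specific response. A real PUF
# gets this behaviour from physical hardware variation, not a stored key.
DEVICE_SECRET = b"unique-silicon-fingerprint"

def puf_response(challenge: bytes) -> bytes:
    return hashlib.sha256(DEVICE_SECRET + challenge).digest()

def mask(data: bytes, challenge: bytes) -> bytes:
    # XOR-mask the health reading with the PUF response (repeated as needed);
    # applying the same mask again unmasks the data.
    resp = puf_response(challenge)
    return bytes(b ^ resp[i % len(resp)] for i, b in enumerate(data))

reading = b"hr=71;bp=118/76"           # invented vital-sign record
masked = mask(reading, b"challenge-01")
assert mask(masked, b"challenge-01") == reading  # round trip restores the data
print(len(masked))  # → 15 (same length as the reading)
```

Only a party that can reproduce the PUF response for that challenge can unmask the reading, which is the property the paper relies on for device-side privacy.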
Topics: Humans; Animals; Octopodiformes; Data Anonymization; Seafood; Cell Phone; Machine Learning
PubMed: 37112425
DOI: 10.3390/s23084082 -
Sensors (Basel, Switzerland), Mar 2023
The advent of Artificial Intelligence (AI) and the Internet of Things (IoT) has recently created previously unimaginable opportunities for boosting clinical and patient services, reducing costs, and improving community health. Yet a fundamental challenge that the modern healthcare management system faces is storing and securely transferring data. Therefore, this research proposes a novel Lionized remora optimization-based serpent (LRO-S) encryption method to encrypt sensitive data and reduce privacy breaches and cyber-attacks from unauthorized users and hackers. The LRO-S method combines hybrid metaheuristic optimization with an improved security algorithm. The fitness functions of the lion and remora optimizers are combined to create a new algorithm for security key generation, which is provided to the serpent encryption algorithm. The LRO-S technique encrypts sensitive patient data before storing it in the cloud. The primary goal of this study is to give medical professionals safer and more adaptable access to cloud-based patient-sensitive data. The experiment's findings suggest that the secret keys generated are sufficiently random and unique to provide adequate protection for the data stored in modern healthcare management systems. The proposed method minimizes the time needed to encrypt and decrypt data and improves privacy standards. This study found that the suggested technique outperformed previous techniques in reducing execution time and is cost-effective.
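The overall pipeline (a metaheuristic searches for a high-quality key, which then feeds a symmetric cipher) can be sketched very loosely. In this sketch, plain random search with a byte-entropy fitness stands in for the lion/remora hybrid, and an XOR stream stands in for the Serpent cipher; none of this reproduces the paper's actual algorithms:

```python
import math
import random

def byte_entropy(key):
    # Shannon entropy of the key bytes, used here as a stand-in fitness
    counts = {}
    for b in key:
        counts[b] = counts.get(b, 0) + 1
    return -sum(c / len(key) * math.log2(c / len(key)) for c in counts.values())

def search_key(rounds=200, size=16):
    # Random search as a stand-in for the metaheuristic: keep the
    # candidate key with the highest entropy fitness seen so far.
    best = bytes(random.randrange(256) for _ in range(size))
    for _ in range(rounds):
        cand = bytes(random.randrange(256) for _ in range(size))
        if byte_entropy(cand) > byte_entropy(best):
            best = cand
    return best

def xor_cipher(data, key):
    # XOR stream stands in for the Serpent block cipher (NOT secure)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = search_key()
record = b"patient:0421;hr=72;spo2=97"   # invented patient record
ciphertext = xor_cipher(record, key)
assert xor_cipher(ciphertext, key) == record  # decryption restores the record
print(len(ciphertext))  # → 26
```

The sketch only conveys the shape of the design: an optimizer scores candidate keys by a randomness fitness before the chosen key is handed to the encryption stage.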
Topics: Humans; Artificial Intelligence; Computer Security; Algorithms; Privacy; Delivery of Health Care
PubMed: 37050672
DOI: 10.3390/s23073612 -
IEEE Journal of Biomedical and Health..., Dec 2022
Early detection of COVID-19 is an ongoing area of research that can help with triage, monitoring and general health assessment of potential patients and may reduce operational strain on hospitals that cope with the coronavirus pandemic. Different machine learning techniques have been used in the literature to detect potential cases of coronavirus using routine clinical data (blood tests, and vital signs measurements). Data breaches and information leakage when using these models can bring reputational damage and cause legal issues for hospitals. In spite of this, protecting healthcare models against leakage of potentially sensitive information is an understudied research area. In this study, two machine learning techniques that aim to predict a patient's COVID-19 status are examined. Using adversarial training, robust deep learning architectures are explored with the aim of protecting attributes related to demographic information about the patients. The two models examined in this work are intended to preserve sensitive information against adversarial attacks and information leakage. In a series of experiments using datasets from the Oxford University Hospitals (OUH), Bedfordshire Hospitals NHS Foundation Trust (BH), University Hospitals Birmingham NHS Foundation Trust (UHB), and Portsmouth Hospitals University NHS Trust (PUH), two neural networks are trained and evaluated. These networks predict PCR test results using information from basic laboratory blood tests and vital signs collected from a patient upon arrival to the hospital. The level of privacy each of the models can provide is assessed, and the efficacy and robustness of the proposed architectures are compared with a relevant baseline. One of the main contributions of this work is its particular focus on the development of effective COVID-19 detection models with built-in mechanisms to selectively protect sensitive attributes against adversarial attacks.
The results on the hold-out test set and external validation confirmed that adversarial learning had no impact on the generalisability of the models.
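The adversarial-training objective used for attribute protection can be caricatured in one line: the predictor minimises its task loss while maximising an adversary's loss at recovering the protected demographic attribute. A schematic sketch (the weighting `lam` and the loss values are invented; the actual models are deep networks trained on hospital data):

```python
# Schematic adversarial objective for hiding a sensitive attribute:
# good task performance (low task_loss) AND a confused adversary
# (high adversary_loss) both push the combined loss down.
def combined_loss(task_loss, adversary_loss, lam=1.0):
    return task_loss - lam * adversary_loss

# Invented values: decent COVID-19 prediction, adversary near chance level
print(round(combined_loss(0.30, 0.65, lam=0.5), 3))  # → -0.025
```

In practice this is trained as a minimax game (e.g. via a gradient-reversal layer), with `lam` trading prediction accuracy against how little demographic information the learned representation retains.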
PubMed: 37015447
DOI: 10.1109/JBHI.2022.3230663 -
Frontiers in Public Health, 2023
Digital health data collection is vital for healthcare and medical research, but because it contains sensitive information about patients, it is challenging. To collect health data without privacy breaches, the data must be secured between the data owner and the collector. Existing data collection studies make overly stringent assumptions, such as using a third-party anonymizer or a private channel between the data owner and the collector. These studies are more susceptible to privacy attacks due to third-party involvement, which makes them less applicable to privacy-preserving healthcare data collection. This article proposes a novel privacy-preserving data collection protocol that anonymizes healthcare data without using a third-party anonymizer or a private channel for data transmission. A clustering-based k-anonymity model was adopted to efficiently prevent identity disclosure attacks, and communication between the data owners and the collector is restricted to elected representatives of each equivalent group of data owners. We also identified a privacy attack, known as "leader collusion", in which the elected representatives may collaborate to violate an individual's privacy. We propose solutions for such collusions and for sensitive attribute protection. A greedy heuristic method is devised to efficiently handle data owners who join or depart the anonymization process dynamically. Furthermore, we present potential privacy attacks on the proposed protocol and a theoretical analysis. Extensive experiments were conducted on real-world datasets, and the results suggest that our solution outperforms state-of-the-art techniques in terms of privacy protection and computational complexity.
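The k-anonymity guarantee at the heart of such protocols means every released record shares its quasi-identifier values with at least k-1 other records. A minimal single-attribute sketch (ages, group labels, and the simple sorted-grouping strategy are all invented for illustration; the paper's protocol additionally distributes this work among elected representatives of each equivalent group):

```python
# Minimal k-anonymity sketch: generalise a quasi-identifier (age) into
# range labels so each label covers at least k records. Assumes the
# number of records is a multiple of k, for simplicity.
def k_anonymise_ages(ages, k):
    label_of = {}
    ordered = sorted(ages)
    for i in range(0, len(ordered), k):
        group = ordered[i:i + k]                 # k neighbouring ages
        label = f"{group[0]}-{group[-1]}"        # generalised range label
        for age in group:
            label_of.setdefault(age, label)
    return [label_of[a] for a in ages]

ages = [23, 25, 31, 34, 45, 47]
print(k_anonymise_ages(ages, k=3))
# → ['23-31', '23-31', '23-31', '34-47', '34-47', '34-47']
```

After generalisation, an attacker who knows someone's exact age can no longer single out their record: at least k candidates share each published label, which is what blocks the identity disclosure attacks the article targets.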
Topics: Humans; Privacy; Disclosure; Data Collection; Biomedical Research; Cluster Analysis
PubMed: 36935661
DOI: 10.3389/fpubh.2023.1125011 -
Sensors (Basel, Switzerland), Mar 2023
The overwhelming popularity of technology-based solutions and innovations to address day-to-day processes has significantly contributed to the emergence of smart cities, where millions of interconnected devices and sensors generate and share huge volumes of data. The easy and high availability of rich personal and public data generated in these digitalized and automated ecosystems renders smart cities vulnerable to intrinsic and extrinsic security breaches. Today, with fast-developing technologies, the classical username and password approaches are no longer adequate to secure valuable data and information from cyberattacks. Multi-factor authentication (MFA) can provide an effective solution to minimize the security challenges associated with legacy single-factor authentication systems (both online and offline). This paper identifies and discusses the role and need of MFA for securing the smart city ecosystem. The paper begins by describing the notion of smart cities and the associated security threats and privacy issues. The paper further provides a detailed description of how MFA can be used for securing various smart city entities and services. A new concept of blockchain-based multi-factor authentication named "BAuth-ZKP" for securing smart city transactions is presented in the paper. The concept focuses on developing smart contracts between the participating entities within the smart city and performing the transactions with zero knowledge proof (ZKP)-based authentication in a secure and privacy-preserved manner. Finally, the future prospects, developments, and scope of using MFA in the smart city ecosystem are discussed.
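The zero-knowledge flavour of such authentication can be illustrated with a toy Schnorr identification round: the prover convinces the verifier it knows a secret key without ever transmitting it. The group parameters below are tiny and purely illustrative (real deployments use elliptic curves or 2048+ bit groups), and this is a generic textbook construction, not the paper's BAuth-ZKP protocol:

```python
import random

# Toy Schnorr identification round: the prover knows x with y = g^x mod p
# and proves knowledge of x without revealing it.
p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup

x = random.randrange(1, q)       # prover's secret key (never sent)
y = pow(g, x, p)                 # prover's public key

# One interactive round (Fiat-Shamir would derive c from a hash instead)
r = random.randrange(1, q)       # prover's ephemeral nonce
t = pow(g, r, p)                 # commitment, sent to verifier
c = random.randrange(1, q)       # verifier's random challenge
s = (r + c * x) % q              # prover's response

# Verifier checks g^s == t * y^c (mod p) without ever learning x
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted")
```

The check passes because g^s = g^(r + c·x) = g^r · (g^x)^c = t · y^c (mod p), yet the transcript (t, c, s) reveals nothing usable about x, which is the privacy-preserving property the smart-contract authentication relies on.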
PubMed: 36904955
DOI: 10.3390/s23052757