Journal of Law and Medicine Dec 2023
People with (a history of) hepatitis C have concerns about privacy and the confidentiality of their health information. This is often due to the association between hepatitis C and injecting drug use and related stigma. In Australia, recent data breaches at a major private health insurer and legislative reforms to increase access to electronic health records have heightened these concerns. Drawing from interviews with people with lived experience of hepatitis C and stakeholders working in this area, this article explores the experiences and concerns of people with (a history of) hepatitis C in relation to the sharing of their health records. It considers the potential application of health privacy principles in the context of hepatitis C and argues for the development of guidelines concerning the privacy of health records held by health departments and public hospitals. Such principles might also inform reforms to legislation regarding access to health records.
Topics: Humans; Privacy; Electronic Health Records; Confidentiality; Hepacivirus; Hepatitis C
PubMed: 38459877
DOI: No ID Found
PeerJ. Computer Science 2022
BACKGROUND
On January 8, 2020, the Centers for Disease Control and Prevention officially announced a new virus in Wuhan, China. The first novel coronavirus (COVID-19) case was discovered on December 1, 2019, implying that the disease had been spreading quietly and quickly in the community before reaching the rest of the world. To curb the virus's wide spread, countries deployed contact tracing mobile applications to control viral transmission. Such applications collect users' information and inform them if they were in contact with an individual diagnosed with COVID-19. However, these applications might have affected human rights by breaching users' privacy.
METHODOLOGY
This systematic literature review followed a comprehensive methodology to highlight current research discussing such privacy issues. First, it used a search strategy to obtain 808 relevant papers published in 2020 from well-established digital libraries. Second, inclusion/exclusion criteria and the snowballing technique were applied to produce more comprehensive results. Finally, by the application of a quality assessment procedure, 40 studies were chosen.
RESULTS
This review highlights privacy issues, discusses centralized and decentralized models and the different technologies affecting users' privacy, and identifies solutions to improve data privacy from three perspectives: public, law, and health considerations.
CONCLUSIONS
Governments need to address the privacy issues related to contact tracing apps. This can be done by enforcing special policies to guarantee users' privacy. Additionally, it is important to be transparent and let users know what data is being collected and how it is being used.
PubMed: 35111915
DOI: 10.7717/peerj-cs.826
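The decentralized contact-tracing model this review compares against centralized designs can be sketched as phones exchanging short-lived random tokens and matching them locally, so no central server learns who met whom. A minimal sketch (the `Phone` class and method names are hypothetical, not from the review):

```python
import secrets

def new_token():
    # Each phone periodically broadcasts a fresh random token
    # (16 random bytes here; real protocols derive tokens from rotating keys).
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.sent = set()    # tokens this phone has broadcast
        self.heard = set()   # tokens received from nearby phones

    def broadcast(self):
        t = new_token()
        self.sent.add(t)
        return t

    def receive(self, token):
        self.heard.add(token)

    def check_exposure(self, published_tokens):
        # Decentralized matching: diagnosed users publish the tokens they *sent*;
        # every phone compares them against tokens it *heard*, entirely on-device.
        return bool(self.heard & set(published_tokens))
```

Because matching happens on the device, the server only ever sees the random tokens of diagnosed users, which is the privacy argument usually made for decentralized over centralized designs.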
The Journal of Law, Medicine & Ethics :... Mar 2020
This article focuses on state privacy, security, and data breach regulation of mobile-app mediated health research, concentrating in particular on research studies conducted or participated in by independent scientists, citizen scientists, and patient researchers. Prior scholarship addressing these issues tends to focus on the lack of application of the HIPAA Privacy and Security Rules and other sources of federal regulation. One article, however, mentions state law as a possible source of privacy and security protections for individuals in the particular context of mobile app-mediated health research. This Article builds on this prior scholarship by: (1) assessing state data protection statutes that are potentially applicable to mobile app-mediated health researchers; and (2) suggesting statutory amendments that could better protect the privacy and security of mobile health research data. As discussed in more detail below, all fifty states and the District of Columbia have potentially applicable data breach notification statutes that require the notification of data subjects of certain informational breaches in certain contexts. In addition, more than two-thirds of jurisdictions have potentially applicable data security statutes and almost one-third of jurisdictions have potentially applicable data privacy statutes. Because all jurisdictions have data breach notification statutes, these statutes will be assessed first.
Topics: Citizen Science; Computer Security; Confidentiality; Government Regulation; Humans; Mandatory Reporting; Mobile Applications; Research; Research Personnel; State Government; United States
PubMed: 32342742
DOI: 10.1177/1073110520917033
Plastic and Reconstructive Surgery Jul 2024
Plastic surgery offices are subject to a wide variety of cybersecurity threats, including ransomware attacks that encrypt the plastic surgeon's information and make it unusable, as well as data theft and disclosure attacks that threaten to disclose confidential patient information. Cloud-based office systems increase the attack surface and do not mitigate the effects of breaches that can result in theft of credentials. Although employee education is often recommended to avoid these threats, a single error by a single employee has often led to a security breach, and it is unreasonable to expect that no employee will ever make an error. Recognizing the two most common vectors of these breaches, compromised email attachments and browsing to compromised websites, allows the use of technical networking tools both to block such email attachments from being received and to prevent employee use of unsanctioned and potentially compromised websites. Furthermore, once compromised code has been allowed to run within the office network, that code must necessarily make outbound connections to exploit the breach. Preventing that outbound traffic can mitigate the effects of a breach. However, most small-office network consultants design firewalls only to limit incoming network traffic and fail to implement technical measures to stop the unauthorized outbound traffic on which most network attacks depend. The authors provide detailed techniques that can be used to direct information technology consultants to properly limit outbound network traffic as well as incoming email attachments.
Topics: Computer Security; Humans; Confidentiality; Surgery, Plastic; Electronic Mail
PubMed: 37220229
DOI: 10.1097/PRS.0000000000010740
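The outbound-traffic (egress) filtering the authors recommend amounts to a default-deny allow-list: anything not explicitly sanctioned is blocked, which also frustrates malware that must "call home" to exfiltrate data. A minimal sketch of such a policy check, with hypothetical host names (real deployments would enforce this at the firewall or proxy, not in application code):

```python
from urllib.parse import urlparse

# Hypothetical allow-list: only known, sanctioned services may be
# reached from inside the office network.
ALLOWED_HOSTS = {"emr.vendor.example", "updates.av.example"}

def egress_permitted(url: str) -> bool:
    # Default-deny outbound policy: permit the connection only when the
    # destination host is on the explicit allow-list.
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS
```

The design choice worth noting is the default: an allow-list fails closed, so a newly compromised workstation cannot reach an attacker's server that nobody thought to block.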
Cureus Aug 2023
Review
The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities. Biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. 
Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
PubMed: 37692617
DOI: 10.7759/cureus.43262
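The secure multiparty computation (SMPC) strategy this review names can be illustrated with additive secret sharing, one simple SMPC building block: a sensitive value is split into random shares that individually reveal nothing, yet sums of shares can be combined into an aggregate. A minimal sketch under assumed parameters (the prime modulus and share count are illustrative, not from the review):

```python
import secrets

P = 2**61 - 1  # public prime modulus; all arithmetic is mod P

def share(value, n=3):
    # Split `value` into n random shares that sum to it mod P.
    # Any n-1 shares are uniformly random and reveal nothing alone.
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

def secure_sum(all_shares):
    # Each party holds one share per patient (a "column"). Every party sums
    # its column locally; combining only these partial sums yields the total
    # without any party ever seeing an individual patient's value.
    partials = [sum(column) % P for column in zip(*all_shares)]
    return sum(partials) % P
```

This is only the additive-sharing primitive; practical healthcare SMPC protocols add authentication and handle multiplication, but the privacy idea is the same.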
Interactive Journal of Medical Research May 2021
Patient data have conventionally been thought to be well protected by the privacy laws outlined in the United States. The increasing interest of for-profit companies in acquiring the databases of large health care systems poses new challenges to the protection of patients' privacy. It also raises ethical concerns of sharing patient data with entities that may exploit it for commercial interests and even target vulnerable populations. Recognizing that every breach in the confidentiality of large databases exposes millions of patients to the potential of being exploited is important in framing new rules for governing the sharing of patient data. Similarly, the ethical aspects of data voluntarily and altruistically provided by patients for research, which may be exploited for commercial interests due to patient data sharing between health care entities and third-party companies, need to be addressed. The rise of technologies such as artificial intelligence and the availability of personal data gleaned by data vendor companies place American patients at risk of being exploited both intentionally and inadvertently because of the sharing of their data by their health care provider institutions and third-party entities.
PubMed: 34018968
DOI: 10.2196/22269 -
BMC Medical Ethics Sep 2021
BACKGROUND
Advances in healthcare artificial intelligence (AI) are occurring rapidly and there is a growing discussion about managing its development. Many AI technologies end up owned and controlled by private entities. The nature of the implementation of AI could mean such corporations, clinics and public bodies will have a greater than typical role in obtaining, utilizing and protecting patient health information. This raises privacy issues relating to implementation and data security.
MAIN BODY
The first set of concerns includes access, use and control of patient data in private hands. Some recent public-private partnerships for implementing AI have resulted in poor protection of privacy. As such, there have been calls for greater systemic oversight of big data health research. Appropriate safeguards must be in place to maintain privacy and patient agency. Private custodians of data can be impacted by competing goals and should be structurally encouraged to ensure data protection and to deter alternative use thereof. Another set of concerns relates to the external risk of privacy breaches through AI-driven methods. The ability to deidentify or anonymize patient health data may be compromised or even nullified in light of new algorithms that have successfully reidentified such data. This could increase the risk to patient data under private custodianship.
CONCLUSIONS
We are currently in a familiar situation in which regulation and oversight risk falling behind the technologies they govern. Regulation should emphasize patient agency and consent, and should encourage increasingly sophisticated methods of data anonymization and protection.
Topics: Artificial Intelligence; Big Data; Computer Security; Data Anonymization; Humans; Privacy
PubMed: 34525993
DOI: 10.1186/s12910-021-00687-3
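The reidentification risk this article raises is often assessed with k-anonymity: a released dataset is considered safer when every combination of quasi-identifiers (attributes like ZIP code and age band that are individually harmless but jointly identifying) is shared by at least k records. A minimal sketch of such a check (the field names and k value are hypothetical):

```python
from collections import Counter

def k_anonymity_violations(records, quasi_identifiers, k=5):
    # Count how many records share each combination of quasi-identifier
    # values; any combination seen fewer than k times marks records that
    # an attacker could plausibly re-identify by linking outside data.
    combos = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return [combo for combo, count in combos.items() if count < k]
```

k-anonymity is a necessary-but-not-sufficient screen: as the article notes, modern reidentification algorithms can defeat naive anonymization, which is why stronger notions (l-diversity, differential privacy) exist.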
The American Journal of Managed Care Dec 2020
OBJECTIVES
The study's objectives were to explore the impact of personal/organizational knowledge, prior breach status of organizations, and framed scenarios on the choices made by privacy officers regarding the decision to report a breach.
STUDY DESIGN
A survey was completed of 123 privacy officers who are members of the American Health Information Management Association (AHIMA).
METHODS
The study used primary data collection through a survey. Individuals listed as privacy officers within the AHIMA were the target audience for the survey. Descriptive statistics, logistic regression, and predicted probabilities were used to analyze the data collected.
RESULTS
The percentage of privacy officers who chose to report a breach to the Office for Civil Rights varied by scenario: scenario 1 (general with little information), 39%; scenario 2 (4-factor risk assessment, paper records), 73.2%; scenario 3 (4-factor risk assessment, ransomware case), 91.9%. Several factors affected the response to each scenario. In scenario 1, privacy officers with a Certified in Healthcare Privacy and Security (CHPS) credential were less likely to report; those who previously reported a prior breach were more likely to report. In scenario 2, privacy officers with a bachelor's degree or graduate education were less likely to report; those who held the CHPS or coding credential were less likely to report.
CONCLUSIONS
Study findings show there are gray areas where privacy officers make their own decisions, and there is a difference in the types of decisions they are making on a day-to-day basis. Future guidance and policies need to address these gaps and can use the insight provided by the results of this study.
Topics: Computer Security; Confidentiality; Data Collection; Delivery of Health Care; Humans; Privacy
PubMed: 33315333
DOI: 10.37765/ajmc.2020.88546
Journal of the Korean Society of... Jul 2022
Review
The importance of ethics in research and the use of artificial intelligence (AI) is increasingly recognized not only in the field of healthcare but throughout society. This article intends to provide domestic readers with practical points regarding the ethical issues of using radiological images for AI research, focusing on data security, privacy protection, and the right to data. Therefore, this article refers to related domestic laws and government policies. Data security and privacy protection is a key ethical principle for AI, in which proper de-identification of data is crucial. Sharing healthcare data to develop AI in a way that minimizes business interests is another ethical point to be highlighted. The need for data sharing makes data security and privacy protection even more important, as sharing increases the risk of a data breach.
PubMed: 36238915
DOI: 10.3348/jksr.2022.0036
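One common element of the de-identification this article discusses is pseudonymization of direct identifiers before imaging data are shared. A minimal sketch using keyed hashing (the key and identifier are illustrative; a plain unkeyed hash would be weaker, since an attacker could rebuild the mapping by hashing guessed patient IDs):

```python
import hashlib
import hmac

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    # HMAC-SHA256 maps the real identifier to a stable pseudonym.
    # The mapping is reproducible only by whoever holds secret_key,
    # so the data custodian can re-link records but recipients cannot.
    return hmac.new(secret_key, patient_id.encode(), hashlib.sha256).hexdigest()
```

Keeping the key with the custodian, separate from the shared dataset, is what distinguishes reversible pseudonymization from full anonymization under most data-protection regimes.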
Journal of Personalized Medicine Aug 2022
Review
While the rapid growth of mobile mental health applications has offered an avenue of support unbridled by physical distance, time, and cost, the digitalization of traditional interventions has also triggered doubts surrounding their effectiveness and safety. Given the need for a more comprehensive and up-to-date understanding of mobile mental health apps in traditional treatment, this umbrella review provides a holistic summary of their key potential and pitfalls. A total of 36 reviews published between 2014 and 2022, including systematic reviews, meta-analyses, scoping reviews, and literature reviews, were identified from the Cochrane Library, Medline (via PubMed Central), and Scopus databases. The majority of results supported the key potential of apps in helping to (1) provide timely support, (2) ease the costs of mental healthcare, (3) combat stigma in help-seeking, and (4) enhance therapeutic outcomes. Our results also identified common themes of apps' pitfalls (i.e., challenges faced by app users), including (1) user engagement issues, (2) safety issues in emergencies, (3) privacy and confidentiality breaches, and (4) the utilization of non-evidence-based approaches. We synthesize the potential and pitfalls of mental health apps reported by the reviews and outline critical avenues for future research.
PubMed: 36143161
DOI: 10.3390/jpm12091376