Revista Da Associacao Medica Brasileira... Feb 2022
Review
OBJECTIVE
The objective of this study was to emphasize the importance of legal and bioethical knowledge in maintaining medical confidentiality, especially in situations where there is a diagnosis of HIV infection.
METHODS
A literature review of studies published in the Scientific Electronic Library Online and National Library of Medicine databases was performed. Sixteen studies available in full, online, and free, published between 2010 and 2020, were selected.
RESULTS
The studies highlighted that, despite the ethical duty to breach confidentiality to protect third parties, many doctors are reluctant to disclose this diagnosis because of the stigmatization and social discrimination associated with HIV infection, which affect integrity, counseling, and the ability to treat patients.
CONCLUSION
An HIV diagnosis raises bioethical and legal questions. Respect for medical confidentiality remains a matter for discussion, as the need to protect the patient's privacy must be balanced against the responsibility to preserve the health of others.
Topics: Confidentiality; Disclosure; HIV Infections; Humans
PubMed: 35239882
DOI: 10.1590/1806-9282.20211043
International Journal of... 2019
BACKGROUND
Telehealth is a great approach for providing high quality health care services to people who cannot easily access these services in person. However, because of frequently reported health data breaches, many people may hesitate to use telehealth-based health care services. It is necessary for telehealth care providers to demonstrate that they have taken sufficient actions to protect their patients' data security and privacy. The government provided a HIPAA audit protocol that is highly useful for internal security and privacy auditing of health care systems; however, this protocol includes extensive details that are not always specific to telehealth, and it is therefore difficult for telehealth practitioners to use.
OBJECTIVE
The goal of this study was to develop and validate a telehealth privacy and security self-assessment questionnaire for telehealth providers.
METHODS
In our previous work, we performed a systematic review on the security and privacy protection offered in various telehealth systems. The results from this systematic review and the HIPAA audit protocol were used to guide the development of the self-assessment questionnaire. The draft of the questionnaire was created by the research team and distributed to a group of telehealth providers for evaluating the relevance and clarity of each statement in the draft. The questionnaire was adjusted and finalized according to the collected feedback and face-to-face discussions by the research team. A website was created to distribute the questionnaire and manage the answers from study participants. A psychometric analysis was performed to evaluate the reliability of the questionnaire.
RESULTS
There were 84 statements in the draft questionnaire. Five telehealth providers provided their feedback to the statements in this draft. They indicated that a number of these statements were either redundant or beyond the capacity of telehealth care practitioners, who typically do not have formal training in information security. They also pointed out that the wording of some statements needed to be adjusted. The final released version of the questionnaire had 49 statements. In total, 31 telehealth providers across the nation participated in the study by answering all the statements in this questionnaire. The psychometric analysis indicated that the reliability of this questionnaire was high.
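The reliability reported above is the kind of property commonly summarized with Cronbach's alpha; the abstract does not say which statistic was used, but a minimal sketch of that calculation, on made-up Likert-style responses, looks like this:

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(responses[0])           # number of items (questionnaire statements)
    items = list(zip(*responses))   # transpose: one tuple of scores per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 5-point ratings from five respondents on four statements.
ratings = [
    [3, 4, 3, 4],
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 4, 3, 4],
]
print(round(cronbach_alpha(ratings), 2))  # consistent items push alpha toward 1
```

With these highly consistent toy ratings, alpha comes out near 0.98, which is what "reliability was high" typically means in this context.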
CONCLUSION
With the availability of this self-assessment questionnaire, telehealth providers can perform a quick self-assessment of their telehealth systems. The assessment results may be used to identify possible vulnerabilities in telehealth systems and practice, or to demonstrate to patients that sufficient security and privacy protections are in place for their data.
PubMed: 31341542
DOI: 10.5195/ijt.2019.6276
The Journal of Law, Medicine & Ethics... Mar 2020
This article focuses on state privacy, security, and data breach regulation of mobile-app mediated health research, concentrating in particular on research studies conducted or participated in by independent scientists, citizen scientists, and patient researchers. Prior scholarship addressing these issues tends to focus on the lack of application of the HIPAA Privacy and Security Rules and other sources of federal regulation. One article, however, mentions state law as a possible source of privacy and security protections for individuals in the particular context of mobile app-mediated health research. This Article builds on this prior scholarship by: (1) assessing state data protection statutes that are potentially applicable to mobile app-mediated health researchers; and (2) suggesting statutory amendments that could better protect the privacy and security of mobile health research data. As discussed in more detail below, all fifty states and the District of Columbia have potentially applicable data breach notification statutes that require the notification of data subjects of certain informational breaches in certain contexts. In addition, more than two-thirds of jurisdictions have potentially applicable data security statutes and almost one-third of jurisdictions have potentially applicable data privacy statutes. Because all jurisdictions have data breach notification statutes, these statutes will be assessed first.
Topics: Citizen Science; Computer Security; Confidentiality; Government Regulation; Humans; Mandatory Reporting; Mobile Applications; Research; Research Personnel; State Government; United States
PubMed: 32342742
DOI: 10.1177/1073110520917033
Interactive Journal of Medical Research May 2021
Patient data have conventionally been thought to be well protected by the privacy laws outlined in the United States. The increasing interest of for-profit companies in acquiring the databases of large health care systems poses new challenges to the protection of patients' privacy. It also raises ethical concerns of sharing patient data with entities that may exploit it for commercial interests and even target vulnerable populations. Recognizing that every breach in the confidentiality of large databases exposes millions of patients to the potential of being exploited is important in framing new rules for governing the sharing of patient data. Similarly, the ethical aspects of data voluntarily and altruistically provided by patients for research, which may be exploited for commercial interests due to patient data sharing between health care entities and third-party companies, need to be addressed. The rise of technologies such as artificial intelligence and the availability of personal data gleaned by data vendor companies place American patients at risk of being exploited both intentionally and inadvertently because of the sharing of their data by their health care provider institutions and third-party entities.
PubMed: 34018968
DOI: 10.2196/22269
PeerJ. Computer Science 2022
BACKGROUND
On January 8, 2020, the Centers for Disease Control and Prevention officially announced a new virus in Wuhan, China. The first novel coronavirus (COVID-19) case was discovered on December 1, 2019, implying that the disease had been spreading quietly and quickly in the community before reaching the rest of the world. To deal with the virus's wide spread, countries have deployed contact tracing mobile applications to control viral transmission. Such applications collect users' information and inform them if they were in contact with an individual diagnosed with COVID-19. However, these applications might have affected human rights by breaching users' privacy.
METHODOLOGY
This systematic literature review followed a comprehensive methodology to highlight current research discussing such privacy issues. First, it used a search strategy to obtain 808 relevant papers published in 2020 from well-established digital libraries. Second, inclusion/exclusion criteria and the snowballing technique were applied to produce more comprehensive results. Finally, by the application of a quality assessment procedure, 40 studies were chosen.
RESULTS
This review highlights privacy issues, discusses centralized and decentralized models and the different technologies affecting users' privacy, and identifies solutions to improve data privacy from three perspectives: public, law, and health considerations.
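The decentralized model the review contrasts with centralized designs can be caricatured in a few lines: phones broadcast short-lived random tokens, a diagnosed user uploads only their own tokens, and exposure matching happens locally on each device, so the server never sees the contact graph. This is an illustrative sketch under those assumptions, not any specific deployed protocol:

```python
import secrets

def new_token() -> str:
    """A rotating ephemeral identifier a phone would broadcast over Bluetooth."""
    return secrets.token_hex(16)

def local_exposure_check(observed: set, published: set) -> set:
    """Run on-device: matching happens locally, not on a central server."""
    return observed & published

# Fixed tokens so the match is visible (real tokens would be random).
alice_tokens = {"t1", "t2", "t3"}      # tokens Alice's phone broadcast
bob_observed = {"t2", "x9", "z4"}      # tokens Bob's phone heard nearby
# Alice tests positive and publishes only her own tokens.
matches = local_exposure_check(bob_observed, alice_tokens)
print(matches)  # {'t2'}: Bob learns of an exposure without revealing his contacts
```

A centralized model would instead upload every observed token to a server, which is precisely the design choice that raises the privacy issues catalogued in this review.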
CONCLUSIONS
Governments need to address the privacy issues related to contact tracing apps. This can be done through enforcing special policies that guarantee users' privacy. Additionally, it is important to be transparent and let users know what data is being collected and how it is being used.
PubMed: 35111915
DOI: 10.7717/peerj-cs.826
BMC Medical Ethics Sep 2021
BACKGROUND
Advances in healthcare artificial intelligence (AI) are occurring rapidly and there is a growing discussion about managing its development. Many AI technologies end up owned and controlled by private entities. The nature of the implementation of AI could mean such corporations, clinics and public bodies will have a greater than typical role in obtaining, utilizing and protecting patient health information. This raises privacy issues relating to implementation and data security.
MAIN BODY
The first set of concerns includes access, use and control of patient data in private hands. Some recent public-private partnerships for implementing AI have resulted in poor protection of privacy. As such, there have been calls for greater systemic oversight of big data health research. Appropriate safeguards must be in place to maintain privacy and patient agency. Private custodians of data can be impacted by competing goals and should be structurally encouraged to ensure data protection and to deter alternative use thereof. Another set of concerns relates to the external risk of privacy breaches through AI-driven methods. The ability to deidentify or anonymize patient health data may be compromised or even nullified in light of new algorithms that have successfully reidentified such data. This could increase the risk to patient data under private custodianship.
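The reidentification risk described above is often reasoned about via k-anonymity: if a combination of quasi-identifiers (for example ZIP code, age, and sex) is shared by fewer than k records, those records are candidates for reidentification even after names are removed. A minimal check, with hypothetical records and field names:

```python
from collections import Counter

def risky_groups(records, quasi_ids, k=2):
    """Return quasi-identifier combinations shared by fewer than k records."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return {combo for combo, n in counts.items() if n < k}

# Hypothetical "deidentified" rows: names removed, quasi-identifiers kept.
rows = [
    {"zip": "02139", "age": 34, "sex": "F"},
    {"zip": "02139", "age": 34, "sex": "F"},
    {"zip": "94110", "age": 61, "sex": "M"},  # unique combination -> reidentifiable
]
print(risky_groups(rows, ["zip", "age", "sex"]))
```

The reidentification algorithms mentioned above go further, linking such unique combinations against outside datasets, which is why anonymization alone may be nullified.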
CONCLUSIONS
We are currently in a familiar situation in which regulation and oversight risk falling behind the technologies they govern. Regulation should emphasize patient agency and consent, and should encourage increasingly sophisticated methods of data anonymization and protection.
Topics: Artificial Intelligence; Big Data; Computer Security; Data Anonymization; Humans; Privacy
PubMed: 34525993
DOI: 10.1186/s12910-021-00687-3
The American Journal of Managed Care Dec 2020
OBJECTIVES
The study's objectives were to explore the impact of personal/organizational knowledge, prior breach status of organizations, and framed scenarios on the choices made by privacy officers regarding the decision to report a breach.
STUDY DESIGN
A survey was completed of 123 privacy officers who are members of the American Health Information Management Association (AHIMA).
METHODS
The study used primary data collection through a survey. Individuals listed as privacy officers within the AHIMA were the target audience for the survey. Descriptive statistics, logistic regression, and predicted probabilities were used to analyze the data collected.
RESULTS
The percentage of privacy officers who chose to report a breach to the Office for Civil Rights varied by scenario: scenario 1 (general with little information), 39%; scenario 2 (4-factor risk assessment, paper records), 73.2%; scenario 3 (4-factor risk assessment, ransomware case), 91.9%. Several factors affected the response to each scenario. In scenario 1, privacy officers with a Certified in Healthcare Privacy and Security (CHPS) credential were less likely to report; those who previously reported a prior breach were more likely to report. In scenario 2, privacy officers with a bachelor's degree or graduate education were less likely to report; those who held the CHPS or coding credential were less likely to report.
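The predicted probabilities reported in the study come from logistic regression; the mechanics can be sketched with hypothetical coefficients (not the study's estimates), where a negative coefficient, like the CHPS effect in scenario 1, lowers the predicted probability of reporting:

```python
from math import exp

def predicted_probability(intercept, coefs, features):
    """Logistic model: P(report) = 1 / (1 + e^-(b0 + b.x))."""
    z = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1 / (1 + exp(-z))

# Hypothetical coefficients for two predictors:
# [holds CHPS credential, previously reported a breach]
b0, b = 0.2, [-0.9, 1.1]

p_chps = predicted_probability(b0, b, [1, 0])   # CHPS holder, no prior report
p_prior = predicted_probability(b0, b, [0, 1])  # prior reporter, no CHPS
print(round(p_chps, 2), round(p_prior, 2))      # 0.33 0.79
```

With these illustrative numbers the CHPS holder's predicted probability of reporting is 0.33 versus 0.79 for a privacy officer who has previously reported a breach, mirroring the direction of the effects the study found.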
CONCLUSIONS
Study findings show there are gray areas where privacy officers make their own decisions, and there is a difference in the types of decisions they are making on a day-to-day basis. Future guidance and policies need to address these gaps and can use the insight provided by the results of this study.
Topics: Computer Security; Confidentiality; Data Collection; Delivery of Health Care; Humans; Privacy
PubMed: 33315333
DOI: 10.37765/ajmc.2020.88546
Journal of the Korean Society of... Jul 2022
Review
The importance of ethics in research and the use of artificial intelligence (AI) is increasingly recognized not only in the field of healthcare but throughout society. This article intends to provide domestic readers with practical points regarding the ethical issues of using radiological images for AI research, focusing on data security and privacy protection and the right to data. Therefore, this article refers to related domestic laws and government policies. Data security and privacy protection is a key ethical principle for AI, for which proper de-identification of data is crucial. Sharing healthcare data to develop AI in a way that minimizes business interests is another ethical point to be highlighted. The need for data sharing makes data security and privacy protection even more important, as sharing increases the risk of a data breach.
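The proper de-identification stressed above usually starts with stripping direct identifiers from image metadata (DICOM headers carry patient name, ID, birth date, and so on). A toy sketch over a plain dictionary; the tag names echo DICOM attributes, but this subset is illustrative and not a compliant de-identification profile:

```python
# Illustrative subset of directly identifying tags; a real de-identification
# profile (e.g., DICOM PS3.15) covers far more fields.
DIRECT_IDENTIFIERS = {"PatientName", "PatientID", "PatientBirthDate"}

def deidentify(header: dict) -> dict:
    """Drop direct identifiers, keeping the remaining acquisition metadata."""
    return {tag: value for tag, value in header.items()
            if tag not in DIRECT_IDENTIFIERS}

header = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "PatientBirthDate": "19700101",
    "Modality": "CT",
    "StudyDate": "20220101",
}
print(deidentify(header))  # only Modality and StudyDate remain
```

In practice the retained fields themselves (dates, rare modalities, institution names) can still act as quasi-identifiers, which is why the article treats de-identification as necessary but not sufficient.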
PubMed: 36238915
DOI: 10.3348/jksr.2022.0036
Journal of Personalized Medicine Aug 2022
Review
While the rapid growth of mobile mental health applications has offered an avenue of support unbridled by physical distance, time, and cost, the digitalization of traditional interventions has also triggered doubts surrounding their effectiveness and safety. Given the need for a more comprehensive and up-to-date understanding of mobile mental health apps in traditional treatment, this umbrella review provides a holistic summary of their key potential and pitfalls. A total of 36 reviews published between 2014 and 2022, including systematic reviews, meta-analyses, scoping reviews, and literature reviews, were identified from the Cochrane Library, Medline (via PubMed Central), and Scopus databases. The majority of results supported the key potential of apps in helping to (1) provide timely support, (2) ease the costs of mental healthcare, (3) combat stigma in help-seeking, and (4) enhance therapeutic outcomes. Our results also identified common themes of apps' pitfalls (i.e., challenges faced by app users), including (1) user engagement issues, (2) safety issues in emergencies, (3) privacy and confidentiality breaches, and (4) the utilization of non-evidence-based approaches. We synthesize the potential and pitfalls of mental health apps provided by the reviews and outline critical avenues for future research.
PubMed: 36143161
DOI: 10.3390/jpm12091376