In this Newsletter
Regulations
Impact Of The EU-U.S. Data Privacy Framework For Health Data And Clinical Trials
The European Commission has adopted an adequacy decision for EU-U.S. data transfers. This decision covers the EU-U.S. Data Privacy Framework (DPF), which regulates data transfers between the U.S. and the EU. Special provisions were created for health data and, in particular, for clinical trials. The DPF requires that patients be informed in case of withdrawal from a clinical trial and sets out information requirements relating to express consent. Additionally, the DPF underlines the obligation for sponsors to inform patients of any data transfers.
DPF Self-Certification Process
The EU-U.S. Data Privacy Framework (DPF) requires U.S.-based organizations to self-certify their adherence to the DPF principles. Organizations submit their self-certification to the U.S. Department of Commerce's International Trade Administration (ITA). The process involves several steps, such as drafting a privacy policy containing specific required statements, designating a contact person, and putting a verification mechanism in place. Once the certification is validated, the organization is added to the publicly available DPF List.
Rise of U.S. Health Data Privacy Regulations
In the wake of Washington's My Health My Data Act, other U.S. states recognized the need for regulation of health data, especially in the absence of a comprehensive national privacy law. Nevada, Connecticut and Illinois passed their own health data privacy bills. The U.S. Federal Trade Commission (FTC) also took part in this movement through the Health Breach Notification Rule, which led to the GoodRx, BetterHelp and Premom enforcement actions.
The European Commission Reinforces The GDPR In Cross-Border Cases
The European Commission proposed a law creating procedural rules for the authorities enforcing the GDPR in cross-border cases. This should bring legal certainty for businesses and quicker remedies for individuals. The law will harmonise procedural rules across Member States' data protection authorities and facilitate cooperation and dispute resolution by setting deadlines for cross-border procedures.
PETs - Privacy Enhancing Technologies
New MIT Privacy Technique For LLMs
Large language models (LLMs) in the clinical domain use considerable amounts of sensitive data that could be extracted by malicious agents. The Massachusetts Institute of Technology (MIT) developed a privacy technique that allows LLMs to remain accurate while protecting sensitive data. The technique, a metric called "Probably Approximately Correct" (PAC) Privacy, determines how much noise, or randomness, must be added to the data for sensitive information to remain protected.
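The underlying idea, adding calibrated random noise so that a released statistic no longer reveals any individual record, can be sketched in a few lines of Python. This is an illustrative toy, not MIT's actual PAC Privacy method (which determines the required noise level automatically from the data); the function name and the readings below are hypothetical.

```python
import random
import statistics

def noisy_mean(values, noise_scale, seed=None):
    """Release the mean of sensitive values with Gaussian noise added.

    The noise masks any single individual's contribution: a larger
    noise_scale gives stronger privacy but a less accurate estimate.
    """
    rng = random.Random(seed)
    true_mean = statistics.fmean(values)
    return true_mean + rng.gauss(0.0, noise_scale)

# Hypothetical systolic blood-pressure readings from a clinical dataset
readings = [118, 125, 131, 122, 140, 119, 128]
private_estimate = noisy_mean(readings, noise_scale=1.0, seed=42)
```

The privacy-utility trade-off is visible directly: with `noise_scale=0.0` the true mean is released unchanged, while larger values make it progressively harder to infer anything about one patient's reading.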
Synthetic Data As Privacy Enhancing Strategy
The UK's data protection authority, the ICO, published a study paper titled 'Exploring Synthetic Data Validation – Privacy, Utility and Fidelity' in cooperation with the Financial Conduct Authority (FCA) and the Alan Turing Institute. This PET uses a mathematical model or algorithm to generate statistically realistic but 'artificial' data. Synthetic data can help businesses mitigate data quality issues, model new and emerging scenarios, and protect commercially sensitive data. In healthcare, most notably in clinical trials, the technique would alleviate privacy risks concerning sensitive data.
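A minimal sketch of the idea, assuming we only want to preserve per-column statistics: fit a simple model to the real records, then sample artificial records from it. Real synthetic-data generators of the kind evaluated in the ICO paper also model correlations between columns and validate the privacy, utility and fidelity of the output; everything below, including the clinical rows, is a hypothetical illustration.

```python
import random
import statistics

def fit_and_sample(real_rows, n_samples, seed=None):
    """Fit an independent Gaussian to each column of the real (sensitive)
    rows, then sample artificial rows with similar per-column statistics.

    Cross-column correlations are ignored here; production generators
    model them explicitly.
    """
    rng = random.Random(seed)
    columns = list(zip(*real_rows))
    params = [(statistics.fmean(c), statistics.stdev(c)) for c in columns]
    return [
        tuple(rng.gauss(mu, sigma) for mu, sigma in params)
        for _ in range(n_samples)
    ]

# Hypothetical clinical rows: (age, systolic blood pressure)
real = [(54, 128), (61, 135), (47, 121), (58, 130), (66, 142)]
synthetic = fit_and_sample(real, n_samples=100, seed=7)
```

The synthetic rows can be shared or used for development without exposing any real patient record, since each one is drawn from the fitted model rather than copied from the source data.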
Confidential Computing: A Solution For The Application Of Privacy Principles In AI?
Generative AI draws on many datasets, so data governance issues can arise. Confidential computing delivers proof of processing, providing hard evidence of a model's authenticity and integrity. It ensures code integrity and encrypts data in use: data and intellectual property are completely isolated from infrastructure owners and made accessible only to trusted applications running on trusted CPUs. Because data and code are protected and isolated by hardware controls, all processing happens privately within the processor without the possibility of data leakage.
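One building block of this "proof of processing" is attestation: the hardware reports a measurement (a cryptographic digest) of the code actually running, and a data owner compares it against the expected value before entrusting data to it. The sketch below illustrates only the comparison logic in plain Python; in real trusted execution environments these measurements are computed and signed in hardware, and the code identifiers used here are hypothetical.

```python
import hashlib

def measure(code_bytes: bytes) -> str:
    """Digest standing in for a TEE 'measurement' of the loaded code."""
    return hashlib.sha256(code_bytes).hexdigest()

def verify(reported_measurement: str, expected_measurement: str) -> bool:
    """A data owner releases data only if the running code matches the
    measurement it expects."""
    return reported_measurement == expected_measurement

expected = measure(b"model_inference_v1")           # value published for the trusted build
reported = measure(b"model_inference_v1")           # value reported by the running enclave
tampered = measure(b"model_inference_v1_backdoor")  # any code change yields a new digest
```

Because any modification to the code changes the digest, a tampered model cannot pass verification, which is the mechanical basis of the integrity guarantee described above.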
Artificial Intelligence
Google's AI For Medicine: Accuracy Over 90% For Clinical Answers
Med-PaLM, Google's AI model for medicine, shows promising results when answering questions in the clinical domain. Over the past few months, the model was opened up to a select group of healthcare and life science organizations for testing. The model still carries risks inherent to LLMs, such as misinformation and bias. Harnessing massive amounts of data is essential to these systems, which leaves the healthcare sector caught between the promise of data-driven solutions and the risk of privacy breaches.
LLM, Health and Regulations
Large language models (LLMs) and earlier AI-based medical technologies differ enough that they may need different regulations. The authors argue that regulation must strive to ensure safety, maintain ethical standards, and protect patient privacy without stifling innovation. They propose a framework that guarantees privacy while still allowing large quantities of data to be used to train models. Regulations should be adapted to this new technology and take its specificities into account; the FDA already regulates some aspects of AI-based medical technologies.
Cybersecurity
U.S. Health Department Breached By MOVEit Hacking Campaign
The U.S. Health Department experienced a data breach that could affect 15 million people, a number inferred by a data analyst. The breach resulted from hackers exploiting a vulnerability in the file-transfer software MOVEit; files that companies and organizations had uploaded to MOVEit are affected. The impacted organizations include public institutions such as the New York City Department of Education and the Louisiana Office of Motor Vehicles, the latter potentially affecting everyone with a state-issued license.
HCA Healthcare Data Breach Impacted 11 million People
HCA Healthcare, a healthcare organization with 180 hospitals and 2,300 ambulatory sites of care across 20 states and the United Kingdom, suffered a data breach impacting 11 million people. The breach occurred on an external storage location used exclusively to automate the formatting of email messages. It concerns information used for those messages, including personal data such as patient names, cities, states, zip codes, email addresses, phone numbers, gender, dates of birth, and appointment information.
Data Privacy Enforcement
FTC Bans BetterHelp From Sharing Sensitive Health Data For Advertising
The Federal Trade Commission (FTC) launched an enforcement action against BetterHelp in March 2023. The FTC has finalized its order requiring BetterHelp to pay $7.8 million and prohibiting it from sharing consumers' health data for advertising. The order also requires the company to comply with privacy principles. BetterHelp shared its users' personal information with Facebook, Snapchat, Criteo, and Pinterest for advertising purposes, despite promising consumers that it would use or disclose personal health data only for limited purposes.
GoodRx, Google, Meta and Criteo Seek Dismissal Of A Class-Action Privacy Complaint
Following the FTC's enforcement action against GoodRx, GoodRx users filed a class-action privacy complaint alleging that the company wrongly shared health information with third parties. GoodRx, together with Meta, Google and Criteo, is seeking dismissal of the class action. GoodRx argues that it informed consumers about data sharing and that users consented to sharing their information in exchange for coupons. The users counter that, for a period of time, GoodRx's policies stated that it would never provide third parties with users' personal health information.