The European Union is no stranger to comprehensive regulation, especially in the tech sector. From rules on data protection and social media to the most recent legislation on artificial intelligence, the EU continues to lead in regulatory innovation.
But what implications do these rules hold for the expanding life sciences and healthcare sectors in Europe?
Data Protection and Privacy
The European Union’s emphasis on data protection is reshaping how businesses handle information. David Kirton, a leading expert in the technology group at William Fry, notes that EU legislation such as the Data Act, the Data Governance Act, and the European Health Data Space Regulation profoundly reshapes existing data protection norms and is highly relevant for life sciences and healthcare businesses.
According to Kirton, the Data Act strengthens the GDPR’s data portability rights, making it easier for patients to switch healthcare providers, particularly where data is collected via smart devices and wearables. The Data Governance Act covers both personal and non-personal data and requires public sector bodies, including those in life sciences and healthcare, to make non-personal data available for sharing.
“These bodies must foster data-sharing mechanisms, ensuring personal data remains GDPR-compliant. This could unlock enormous potential for private businesses and researchers, offering them unprecedented access to healthcare data, pivotal for research, product creation, and service delivery.”
Furthermore, the forthcoming ePrivacy rules will bolster existing data protection norms, particularly around electronic communications such as direct marketing. Businesses will need to bring their electronic communication practices into line with these new rules.
High-Risk AI Systems
In June, the European Parliament voted overwhelmingly to endorse the AI Act, which aims to restrict hazardous technology and supervise AI applications. The act introduces a single, adaptable definition of AI. Roberta Metsola, President of the European Parliament, said:
“We are entering an era of rigorous scrutiny with AI. Our legislative approach needs a rethink considering the ubiquitous AI access.”
Life sciences and healthcare organizations deploying AI will need to be especially vigilant.
“Organizations leveraging high-risk AI systems have to formulate, execute, document, and upkeep a risk management system pertinent to that AI,” Kirton elucidates.
He also highlights the strict quality criteria the act sets for the training data used to develop AI systems.
Kirton cites an example:
“A healthcare establishment utilizing AI for patient triage based on specific symptoms should guarantee the data for training and testing the AI accurately represents all demographic sections.”
Transparency vs. Confidentiality
The EU’s upcoming Health Data Space Regulation, aimed at deep tech companies in life sciences and healthcare, seeks to empower patients by granting them immediate, free access to their health data in an understandable, consolidated form. Kirton explains:
“This regulation targets ‘data holders’ within the healthcare domain who are responsible for making health data accessible following the regulation’s framework.”
There’s a fundamental tension, Kirton argues, between the transparency emphasized by the EU Digital Reforms Package and the confidentiality championed by intellectual property rights.
“A thrilling AI application in life sciences is generative AI’s role in pharmaceuticals and medical equipment creation. However, the AI Act necessitates AI system providers to publicly share a detailed summary of their copyrighted training data usage.”
The implications of this transparency will largely depend on the depth and specifics of the disclosed information.