Data trends 2024
Chapter 1: What to consider when adding data to the AI revolution
By Richard Bird, Julian Boatin, Brock Dahl, Theresa Ehlen, Beth George, Adam Gillert, Giles Pratt, Katie Sa, Max Smith, Satya Staes Polet and Christoph Werkmeister
IN BRIEF
Artificial intelligence (AI) is of growing importance to businesses, which are widely expected to explore the opportunities presented by Generative AI (GenAI) over the next few years. GenAI is capable of processing and analysing large amounts of data and generating new output based on it. Many companies have access to troves of data from which they may wish to extract additional value or efficiencies by using AI. In this article we highlight key considerations for businesses developing or implementing AI.
GenAI models are trained on large volumes of data, which may include personal data, and will also often rely on the processing of personal data as part of their operation.
In Europe, the EU’s General Data Protection Regulation (GDPR), and the UK’s GDPR, apply to the use of GenAI to the extent this includes the processing of personal data. For example:
An increasing number of countries have privacy laws that resemble the EU’s GDPR or impose other challenging requirements. In the US, companies must be conscious of the state data privacy laws that indirectly influence AI operations. Such laws typically contain a range of requirements, such as purpose limitations, data minimisation rules, disclosure limitations, notice and consent obligations, and provisions on automated decision-making. Companies must pay particular attention to these requirements when executing their own AI strategies and designing AI systems.
Many jurisdictions are also in the process of developing laws that specifically target AI. Those laws often overlap with the requirements of privacy laws as well as other legislation (such as those governing copyright, product liability or equalities).
The rapid evolution of AI capabilities and applications, and the ever-expanding regulatory frameworks governing them, suggest the need for adaptable compliance frameworks that can manage cross-border complexity.
Brock Dahl
Partner
The EU is seen as a leader in this regard and will set out various requirements for the use of AI in the AI Act and the AI Liability Directive. Once they enter into effect, those AI regulations may apply not only to providers but also to users of AI within the EU. Providers face numerous obligations, including:
Non-compliance may result in fines of up to €40m or 7% of total worldwide annual turnover, whichever is higher. A final draft of the AI Act is expected by the end of 2023 at the earliest, likely followed by an implementation period of around 24 months.
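The ‘whichever is higher’ fine cap can be illustrated with a minimal sketch, using the €40m / 7% figures from the draft text above (the final figures may differ, and the function name is our own illustration):

```python
def max_ai_act_fine(worldwide_annual_turnover_eur: int) -> int:
    """Illustrative maximum fine under the draft AI Act: the higher of
    EUR 40m or 7% of total worldwide annual turnover (draft figures)."""
    FIXED_CAP_EUR = 40_000_000
    # Integer arithmetic keeps the 7% calculation exact for large turnovers.
    turnover_based_cap = worldwide_annual_turnover_eur * 7 // 100
    return max(FIXED_CAP_EUR, turnover_based_cap)

# A company with EUR 1bn turnover: 7% is EUR 70m, which exceeds EUR 40m.
print(max_ai_act_fine(1_000_000_000))  # 70000000
# A company with EUR 100m turnover: 7% is EUR 7m, so the EUR 40m floor applies.
print(max_ai_act_fine(100_000_000))  # 40000000
```

For smaller businesses the fixed €40m figure will usually be the operative ceiling; for large multinationals the turnover-based cap dominates.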
Given the extensive time and investment required to build an AI system, it is vital that AI providers and other impacted businesses begin to consider the implications of the EU’s pending AI laws. Businesses should keep an eye on possible changes to the draft laws as they complete their legislative journeys. This is all the more important given that the EU, together with tech companies, is currently working on a so-called ‘AI Pact’ to bridge AI governance until the AI Act becomes effective.
Several other jurisdictions, including (among others) Canada, Brazil and China, have either introduced or are planning to introduce AI-specific laws.
Other countries are taking a less direct approach to AI regulation, but businesses will still need to keep abreast of emerging regulator-led initiatives, and potentially a more complex patchwork of applicable laws.
Unlike the EU, the UK is not planning to introduce any new AI-specific regulations or laws. Instead, the government has proposed a ‘pro-innovation’ framework based on five overarching principles to guide the development and use of AI: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. It is envisaged that existing regulators in the UK would be responsible for applying these five principles in practice across sectors. The idea is that the framework should be sufficiently flexible to keep pace with the fast-moving technology involved. The five overarching principles underpinning the UK AI White Paper are broadly aligned with the principles outlined in the UK and EU GDPRs.
The UK government is taking an agile and iterative approach to regulating the use and development of AI, so we advise clients to keep a watching brief of how this develops. Guidance published by UK regulators will be a key resource in the first instance to understand how they intend to apply the five principles in practice.
Maxwell Smith
Associate
Similar to the UK, the US government (at the federal level) has taken a variety of steps to signal its interest in AI issues, but neither it nor the US Congress has yet pursued legislative requirements. For now, AI applications are more typically governed indirectly through the proliferating state data privacy laws.
At the US federal level, the White House has issued an Executive Order that, if fully implemented, would establish a range of regulatory requirements pertaining to AI. These would include:
We look at legal risks along the cycle of an AI use case: input, operation of the model and output. That allows us to address the risks when and where they come up and find appropriate mitigation measures.
Theresa Ehlen
Partner
As explained above, many privacy principles and requirements will be pertinent in considering the development or deployment of AI where personal data is used. Privacy and AI-specific laws are just one part of a legal jigsaw of issues which those developing or using AI should consider. Other matters may include:
The opportunities of using AI in the workplace are as fascinating as the challenges it may trigger, given the variety of legal areas that it involves.
Satya Staes Polet
Counsel
Further background on those broader matters is available in our blog post: Generative AI: Five things for lawyers to consider.
Businesses will often face difficult decisions when determining how to proceed with AI. Accordingly, it is important for companies using AI to implement strong governance arrangements, ensuring a robust process is in place for documenting key decisions and achieving appropriate outcomes where AI is developed, implemented or used.
In relation to privacy, this should include companies considering the implications of using AI as part of existing data privacy and information security assessments. This may include addressing explainability, considering any novel security risks, and ensuring meaningful human review of decisions.