On December 8, 2023, after three days of intensive negotiations, the EU Parliament, Council, and Commission reached a provisional agreement on the EU AI Act, a landmark law governing the development and use of AI in Europe and one of the first attempts anywhere in the world to regulate the technology.
Suggested steps
Before the EU AI Act formally takes effect, organisations that have not already done so should carry out risk assessments to determine how the Act will affect their businesses.
Therefore, we suggest that companies:
examine how AI is developed and used within their organisation and supply chains;
determine which AI principles and boundaries should be established (these are likely to cover ethical issues that go beyond legal requirements, including those set out in the EU AI Act);
assess existing AI risks and controls, and enhance them as needed (including to comply with applicable EU AI Act requirements), at both the corporate and product-lifecycle levels;
identify the individuals responsible for AI risk management and the internal team(s) in charge of AI governance;
review current vendor due diligence procedures for both (i) procuring AI technology and (ii) procuring third-party services, products, and outputs that may involve the use of AI, including generative AI systems;
evaluate current contract templates and determine any revisions needed to address AI risk; and
monitor AI and AI-related laws, guidance, and standards worldwide to ensure that the company’s AI governance framework keeps pace with new global developments.
The EU AI Act has been a long time in the making, beginning with the EU Commission’s Proposal for a Regulation on AI in 2021. Following the surge in popularity of large language models in 2023, the draft had to adapt quickly to keep pace with technical breakthroughs. The final sticking points in the negotiations concerned whether to regulate AI foundation models (highly capable AI systems trained on broad datasets that can learn and perform a wide range of tasks) and how to govern the use of AI in law enforcement.
The legislation takes a systematic, risk-based approach to the oversight of AI technologies. AI is defined by reference to the OECD’s definition, distinguishing it from simpler software systems. A technology’s risk category determines the obligations of its providers and deployers: technologies posing an “unacceptable” level of risk are prohibited, while “high-risk” technologies are subject to strict requirements. The prohibited list includes biometric identification systems (with limited exceptions for law enforcement) and systems that use intentionally manipulative techniques or social scoring, such as predictive policing and emotion recognition systems. Untargeted scraping of facial images from the internet or CCTV footage is banned, and AI used to create manipulated images, such as “deep fakes,” must disclose that the images were AI-generated.
The Act now covers foundation models, applying a tiered, risk-based approach to the obligations imposed on them. Although the detailed text of the legislation has yet to be published, the EU has agreed a two-tier approach: “transparency requirements for all general-purpose AI models (such as ChatGPT)” and “more stringent requirements for powerful models that have systemic effects.” A dedicated AI office will be established within the European Commission to oversee the most sophisticated AI models.
As for obligations under the Act, those seeking to provide and deploy AI face specific transparency and safety requirements. To mitigate risks to health, safety, human rights, and democracy, developers of high-risk AI must apply safeguards at various stages, including design and testing. This involves assessing and mitigating risks and registering models in an EU database. Certain users of high-risk AI systems that are public entities must also register in the EU database.
Penalties for prohibited practices can reach EUR 35 million or 7% of a company’s annual global turnover, whichever is higher. Breaches of the Act’s other obligations carry fines of up to EUR 15 million or 3% of turnover, and the supply of incorrect information up to EUR 7.5 million or 1.5%. Proportionate caps apply to fines imposed on SMEs and start-ups that breach the Act. How the Act will be enforced is not yet clear.
The provisional agreement clarifies that the EU AI Act applies only within the scope of EU law. It therefore covers providers of AI systems placed on the EU market, whether or not they are based in the EU, but does not affect member states’ competences in national security. It also excludes AI systems used solely for research and innovation, as well as individuals using AI for non-professional purposes. The Act will take effect two years after its entry into force, with specific exceptions for certain provisions.
Several technology groups and European companies have expressed concern that the legislation may hinder innovation in Europe, especially around foundation models. Technology organisations argued that the applications of AI, rather than the technology itself, should be regulated (an approach closer to the one now being taken in many other jurisdictions). EU officials, however, believe their final negotiations struck a fairer balance between facilitating innovation and encouraging responsible technology.
Companies should keep in mind that compliance with the EU AI Act will be just one aspect of a business’s Responsible AI governance programme. Although the EU hails the EU AI Act as the first comprehensive AI legislation, lawmakers around the world are introducing numerous AI-related measures, and regulators are already examining organisations’ compliance with existing laws that bear on AI, such as data privacy, consumer protection, and anti-discrimination law.
The EU AI Act now awaits formal adoption by both the European Parliament and the Council to become EU law. Recent remarks from both proponents and critics of the provisional agreement suggest that its final endorsement may not come quickly. We will continue to monitor this progress.
Thanks to Karen Battersby (Director of Knowledge for Industries and Clients) and Kathy Harford (LKL for IP and Data & Technology) for their contributions to this alert.
Baker McKenzie’s recognised AI experts assist multinational companies with strategic advice for safe and compliant AI development and deployment. Our professionals, with expertise spanning technology, data privacy, intellectual property, cybersecurity, trade compliance, and employment, can support you at any point in your Responsible AI journey and discuss the latest legislative and regulatory developments, along with the associated legal risks and considerations for your organisation.