A global legal standard for Artificial Intelligence? The European Parliament proposes legislation

In 2016, the European Union adopted the General Data Protection Regulation (GDPR). Since coming into force in 2018, GDPR-style principles and laws have been adopted by many countries outside the EU, with the result that the GDPR is now widely regarded as the global standard for data privacy law.

Given the prevalence of GDPR standards around the world and the size and importance of the EU market for global trade, it is rare for organisations not to give some consideration to the impact of the GDPR on their business and trading activities. It is this type of global standard that the EU expressly wishes to achieve in the world of artificial intelligence (AI).

Last month, the European Parliament voted in favour of two legislative initiatives and a draft report comprising recommendations to the European Commission for a framework for AI that the Parliament hopes will boost innovation, ethical standards and trust in the technology. The proposals are particularly timely in light of new data showing that US patents in AI rose by more than 100% between 2002 and 2018; so, while the US may be the leader in AI creation, the EU may become the leader in its regulation.

The three publications focus on:

  • the ethical and legal obligations in deploying AI;
  • civil liability for damage caused by AI; and
  • the creation and protection of intellectual property rights (IPRs) generated by or through AI.

Each of the above makes recommendations for an appropriate legislative framework. Common to each publication is the designation of the “deployer” of AI: the person who exercises control over the functioning of the AI. Because the rules would attach to the deployer, the regulations would apply to non-EU entities seeking to operate AI within the EU (a significant market for US technology firms), mirroring the extra-territorial scope of the GDPR.

Ethics framework for AI

The legislative initiative regarding the ethical aspects of AI recommends that the European Commission implement a new legal framework outlining the ethical principles and legal obligations to be followed when developing, deploying and using AI, robotics and related technologies in the EU, including software, algorithms and data.

Much like the GDPR, the initiative has several principles at its core that would underpin any resulting legislation, obligations or requirements. These principles include:

  • a human-centric and human-made AI;
  • safety, transparency and accountability;
  • safeguards against bias and discrimination;
  • the right to redress;
  • social and environmental responsibility; and
  • respect for privacy and data protection.

Regarding data privacy, the initiative would align with the GDPR and the EU Charter of Fundamental Rights to ensure that the rights of the individual are given due protection when AI is designed and deployed. The proposed rules would also cover the use of remote or biometric recognition technologies, such as tracing apps, an area of particular focus at present given the “track and trace” programmes many countries have adopted to curb the spread of COVID-19, and the widely perceived invasiveness of such measures.

The initiative recommends that AI technologies carrying a significant risk of breaching the regulation's ethical principles be designed to allow for human oversight at any time. For example, if an AI system has a self-learning capacity that could prove dangerous and result in a serious breach of ethical principles, it should be possible to disable that functionality and restore full human control.

The initiative reiterates a request made by the European Parliament in 2017 for the creation of a European Agency for Artificial Intelligence, which would be responsible for supervising and offering guidance on the application of the proposed regulations and for developing common criteria for a European certificate of ethical compliance. Developers and deployers of AI would be able to request a certificate to certify their compliance with the regulation.

Liability for AI causing damage

The legislative initiative concerning civil liability for AI calls for amending the current civil liability framework applicable in the EU to clarify who is liable for damage arising from the actions (or inactions) of AI. It claims that greater clarity in this area would promote trust in AI by deterring dangerous activities and stimulate innovation by providing businesses with legal certainty (a certainty that has, for example, been lacking in the conversation about liability for autonomous vehicles).

This initiative proposes a set of rules that would apply to physical or virtual AI activity that harms or damages life, health, physical integrity or property, or that causes significant harm resulting in a “verifiable economic loss”. It proposes a “future-oriented” framework under which deployers of AI designated as “high-risk” (including self-driving vehicles and autonomous robots) would be strictly liable for any damage it causes, and the person suffering the harm or damage would benefit from a presumption of fault on the part of the deployer. The initiative also recommends:

  • that it be mandatory for deployers of high-risk AI to hold liability insurance sufficient to cover the maximum amounts of compensation under the regulation (€10m in the event of death or physical harm and €2m for damage to property);
  • a methodology for calculating compensation; and
  • the limitation periods that would apply to claims by injured parties.

Intellectual property rights (IPRs)

The report on IPRs calls for an impact assessment of AI development on IPRs. It states that EU global leadership in AI will require an effective legal system for AI-generated IPRs and safeguards for the EU’s patent system to protect innovative developers. However, it stresses that this should not come at the expense of human creators’ interests, nor the EU’s ethical principles on AI.

The “autonomisation” of the creative process raises issues about the ownership of IPRs. The report states that it would be inappropriate for AI to have legal personality and, accordingly, that AI should not be capable of owning IPRs (an approach consistent with other jurisdictions around the world). The report advocates a framework whereby certain works generated by AI should be regarded as equivalent to intellectual works and therefore capable of copyright protection, with ownership generally assigned to the person who prepares and publishes the work lawfully. Such a framework would protect the technical and artistic creations generated through the medium of AI and encourage this form of creation.

The report notes the essential role of data in the development of AI and warns that lock-in effects and the dominance of certain undertakings could reduce the accessibility of data and inhibit innovation. To stimulate innovation in AI, the report encourages the sharing of data generated in the EU. The mechanisms by which the EU may seek to facilitate data sharing in future could have significant implications for holders of IPRs and for the data industry more broadly.

The European Commission will consider the initiatives and reports submitted by the European Parliament. Its legislative proposal on these matters is expected in early 2021.

Co-authored by trainee solicitor Harrison Clark.
