The EU’s AI Act: What does it mean for game developers?


The act – in place since 1st August 2024 – is designed to ensure safety and transparency, but with that responsibility comes the necessity for careful compliance




On 1st August 2024, the Artificial Intelligence Act (aka the EU AI Act) came into force in the EU. The AI Act is intended to ensure that Europeans can trust what AI has to offer. And while most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that must be addressed to avoid undesirable outcomes.

In this article, Alexandra Kurdyumova, co-founder of Futura Digital, and Nazar Volkov, junior associate, wade through this potential legal minefield and explain what it means for game developers seeking to embrace AI while avoiding falling foul of the new legislation.


The EU’s AI Act may become the standard for regulating AI across the world, much as the EU General Data Protection Regulation, adopted in 2016 to protect the privacy of users, became the benchmark in its field.

Let’s see what is inside and how it may affect the game development industry as it starts to use AI more intensively.


What is the EU AI Act?

The EU AI Act is a broad regulation which applies directly in all EU countries.

Its purpose is to lay the foundation for the regulation of AI within the EU, ensuring that AI systems used by EU companies or addressed to EU residents are safe and transparent, and that they respect fundamental rights. This means that companies located outside the EU but distributing their AI products to EU residents will also need to comply with the EU AI Act.

The act is built on a risk-based approach: the more significant the potential threats from using an AI system, the stricter the rules applied to it. The act regulates all kinds of AI systems, but excludes those used for military and national security purposes, and those used for personal R&D.

What’s inside the EU AI Act?

The EU AI Act uses a pyramid of risk assessment, from unacceptable risk at the top down to systems deemed to present minimal risk at the bottom.



In order from highest to lowest risk, therefore (a rough code sketch of this tiering follows the list):

  • Unacceptable risk – Immediately prohibited since they contravene core EU values

Examples of prohibited systems are social scoring based on the profiling of people, scraping of facial images from the internet or CCTV footage to create facial recognition databases, emotion recognition in the workplace and educational institutions, and manipulative AI systems.

  • High-risk AI systems – Those that entail threats to health, safety, the environment and fundamental rights.

This category is regulated in the broadest manner and includes systems which process personal data to assess various aspects of a person’s life, for example to evaluate eligibility for credit, health or life insurance, or public benefits, or to analyze job applications and evaluate candidates.

The providers of such systems must comply with several obligations, including risk management, guaranteeing dataset quality, preparing appropriate technical documentation, registration in a dedicated EU database, and others.

  • General purpose AI – The foundation for creating niche applications

This section includes AI models which are trained on broad data at scale and can be adapted to a wide range of tasks: for example, image/speech recognition, audio/video generation and pattern detection. As a guide, GPT-4 falls into this category.

AI in this category must comply with transparency requirements. Providers of such systems must prepare technical documentation, comply with copyright law and disclose information about training datasets.

  • Limited-risk AI systems – Those that still entail threats of deception or impersonation

This section consists of AI systems that interact with consumers. For example, AI used for the generation of content (image, audio or video), emotion recognition systems, or the AI generation of deepfakes.

The providers of AI of this category must inform consumers that they are interacting with AI and that the content produced was generated by AI.

  • Minimal risk – Largely unregulated, since these systems entail little or no risk.

This category covers the bulk of AI applications, including spam filters and AI-enabled video games.
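To make the tiering concrete, below is a minimal, purely illustrative Python sketch of how a studio might run a first-pass triage of its own AI features against the act’s categories. The RiskTier enum, the triage function and the feature flags are our own assumptions distilled from the categories above; they are a thought experiment, not legal advice.

    from enum import Enum

    class RiskTier(Enum):
        """Simplified mirror of the EU AI Act's risk pyramid (illustrative)."""
        UNACCEPTABLE = "prohibited outright"
        HIGH = "risk management, documentation, EU database registration"
        LIMITED = "transparency duties"
        MINIMAL = "no specific obligations"

    def triage(feature: dict) -> RiskTier:
        """Rough first-pass triage of a game feature; not legal advice."""
        # Practices the act prohibits outright
        if feature.get("social_scoring") or feature.get("facial_image_scraping"):
            return RiskTier.UNACCEPTABLE
        # Systems assessing credit, insurance, benefits or job candidates
        if feature.get("assesses_credit_insurance_or_jobs"):
            return RiskTier.HIGH
        # Consumer-facing systems with deception/impersonation potential
        if (feature.get("generates_deepfakes")
                or feature.get("recognizes_player_emotions")
                or feature.get("chats_with_players")):
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    # Example: an NPC dialogue system driven by a language model
    print(triage({"chats_with_players": True}))  # RiskTier.LIMITED

On this simplified reading, most game features land in the minimal or limited tiers, which matches the act’s treatment of AI-enabled video games.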

How the EU AI Act relates to game developers

Simply put, the EU AI Act does not regulate AI used in games unless it falls into a higher-risk category: for example, if the game developer includes a deepfake of a real person in a game, or if the game’s features can recognize the emotions of the gamer.

But even if this were the case, the developer would only be obliged to comply with transparency rules (i.e. to tell gamers exactly what they are interacting with).
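In practice, that transparency duty could be as simple as labelling AI-generated content before it reaches players. The sketch below is a hypothetical example of such a label; the GameAsset class and the disclosure wording are our own assumptions, as the act does not prescribe specific phrasing.

    from dataclasses import dataclass

    @dataclass
    class GameAsset:
        """A piece of in-game content flagged by the content pipeline."""
        name: str
        ai_generated: bool

    AI_DISCLOSURE = "[AI-generated]"  # hypothetical wording, not prescribed by the act

    def display_name(asset: GameAsset) -> str:
        """Attach an AI disclosure to assets covered by transparency rules."""
        return f"{AI_DISCLOSURE} {asset.name}" if asset.ai_generated else asset.name

    cutscene = GameAsset(name="Mayor's victory speech", ai_generated=True)
    print(display_name(cutscene))  # [AI-generated] Mayor's victory speech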



What risks does AI pose to game developers?

Since games are not related to the threats discussed above, the key issue for game devs arising from the use of AI is copyright.

Are AI developers allowed to train their models on the copyrighted content of others? This debate has already become commonplace and the world awaits legislative and regulatory approaches to solve this puzzle.

Right now it’s difficult to choose between the options in this regard:

  • On the one hand, allowing the training of AI models on copyrighted content may give a significant boost to AI and the rate of future development.

  • On the other hand, this approach would discourage authors from continuing their creative activity, since it deprives them of the monetary incentives they could otherwise expect if AI developers were obliged to license copyrighted content to train their models.

The problem gets more serious when comparing jurisdictions. In the USA, this question may be resolved through the “fair use” doctrine, which may allow the training of AI models on copyrighted content; the UK’s narrower “fair dealing” exceptions play a similar role.

However, it’s not such an obvious solution for Europe, since EU legislation sets out a strict, closed list of copyright exceptions, and AI training is not one of them.

It therefore appears that existing laws are insufficient to address this issue. The question cannot be answered solely by examining legal technicalities within legislation; the answer more likely lies in legal policy, and will require careful assessment of the potential benefits and consequences for economic and social development worldwide before a decision can be made.
