AI Act

AI Act and the video industry - what you need to know as a creator or marketer

The AI Act is the first law in the European Union that clearly defines how artificial intelligence may (and may not) be used. It entered into force on August 1, 2024, with its obligations phasing in over the following years, and it applies to anyone who uses AI in content creation - including the video industry.

In short: if you create video with AI help, edit, personalize ads or use deepfakes - this regulation applies to you.

In this article, we explain:

  • What are the risk levels according to the AI Act,

  • Which applications of AI in video are legal and which are banned,

  • What responsibilities creators, marketers and agencies have.


What exactly is the AI Act?

This is a new European Union law that specifies how artificial intelligence tools should be designed and used - so that they are safe, transparent and do not infringe on people's rights. For the creative industry, especially video, this means specific requirements.

Find the full text of the law here:
👉 https://eur-lex.europa.eu/eli/reg/2024/1689/oj


How does the AI Act divide AI systems?

The regulation introduces four risk levels. The level determines what is allowed and what is not - and what obligations you have as a creator or marketer.

1. Prohibited (unacceptable risk)

Some applications of AI are completely prohibited. These include:

  • Real-time facial recognition in public spaces (with narrow exceptions for law enforcement under judicial authorization),

  • Deepfakes that are not labeled and may be misleading (e.g., political),

  • AI systems detecting emotions in schools or workplaces,

  • Manipulation of children's perception in advertising campaigns.

What does this mean for the video industry?
You cannot create unlabeled deepfakes, impersonate real people, manipulate children's emotions or use biometric data without clear legal permission.


2. High risk

This category covers systems that can affect someone's life or rights - for example:

  • AI that analyzes candidates based on video footage (e.g., facial expressions, tone of voice),

  • Student assessment systems in online courses,

  • Employee monitoring based on image analysis.

Your responsibilities:
If you use such AI, you must keep records, run tests, ensure human oversight and register the system in the EU database.


3. Limited risk

These are popular tools that do not make decisions about a person but can influence their decisions. Examples:

  • video chatbots,

  • AI-presenters or voiceovers,

  • personalized video ads,

  • deepfakes labeled as fake.

Requirements?
You must clearly inform users that the content was generated by AI - no attempts to hide it.


4. Minimal risk

These are tools that require no paperwork. For example:

  • AI for video editing,

  • Automatic subtitles and translations,

  • Content recommendation algorithms (e.g., YouTube Shorts, TikTok, Instagram Reels).


What does this mean in practice?

If you are creating video:

  • Label all AI-generated content - even if it is only a voice or a face.

  • Don't use unlabeled deepfakes - they may be illegal.

  • If your tool analyzes people (e.g., recruitment), prepare documentation and check compliance.

If you are doing video marketing:

  • Make sure your campaigns comply with the rules - especially those targeting children.

  • Work only with tools that are AI Act compliant (i.e., that offer transparency, documentation and labeling).

If you are an agency or production house:

  • Ask AI tool vendors if their solutions are AI Act compliant.

  • Educate your customers - you are responsible for the legality of what you publish.

  • Prepare the team for possible audits or customer questions.


Examples of AI applications in video by risk level:

| Application | Risk level | What you need to do |
|---|---|---|
| Unlabeled deepfake of a politician | Prohibited | Not allowed |
| Analysis of a candidate's video in recruitment | High | Documentation, human oversight |
| AI presenter in advertising | Limited | Clear labeling |
| Video editing by AI | Minimal | No requirements |

How to prepare?

  1. Check where in your creative process you are using AI.

  2. Evaluate which category the tool falls into.

  3. If you need to - tag the content, collect documentation or change the tool.

  4. Choose reliable suppliers who know the regulations.

  5. Establish internal policies for working with AI - before anyone asks about legal compliance.
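To make steps 1-3 concrete, here is a minimal sketch in Python of how a team might keep an internal inventory mapping each AI use case to its risk level and the action it implies. The tool names and category wording are illustrative (they mirror the table in this article) and this is not legal advice:

```python
# Toy internal compliance checklist. The risk categories and actions
# below are illustrative summaries of the AI Act levels described in
# this article - not legal advice.

RISK_ACTIONS = {
    "prohibited": "Do not use - banned under the AI Act.",
    "high": "Keep documentation, ensure human oversight, register the system.",
    "limited": "Clearly label the content as AI-generated.",
    "minimal": "No extra obligations.",
}

# Hypothetical inventory of AI tools in a video production pipeline.
TOOL_INVENTORY = {
    "unlabeled political deepfake": "prohibited",
    "candidate video analysis in recruitment": "high",
    "AI presenter in an ad": "limited",
    "AI video editing": "minimal",
}

def compliance_checklist(inventory):
    """Return the required action for every tool in the inventory."""
    return {tool: RISK_ACTIONS[level] for tool, level in inventory.items()}

for tool, action in compliance_checklist(TOOL_INVENTORY).items():
    print(f"{tool}: {action}")
```

A simple inventory like this makes audits and customer questions easier to answer, because every tool in the pipeline already has a documented risk level and response.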


Summary

The AI Act is not only a new obligation, but also a new standard.
It regulates an area that until now has largely been left to creators' own judgment - the use of AI in content creation.

What this means for the video industry is the need for greater accountability, transparency and sometimes simple measures - like adding an "AI-generated" label.

It is better to implement these principles now than to risk later. It's an investment in the trust of your audience and customers.
