
What Are the Consequences of the EU AI Act for B2B Companies in Digital Commerce and Marketing?


Anastasia Linnik

Chief Artificial Intelligence Officer, Retresco


Artificial Intelligence (AI) has become an integral part of digital commerce and marketing, enabling companies to implement innovative projects, develop personalised customer interactions and operate more efficiently through smart digitalisation and automation. This is particularly true for the field of Natural Language Processing in general and so-called Foundation Models in particular.

In response to the rapid development of such AI technologies, the European Union is currently introducing the so-called "EU AI Act" to regulate the use of AI and ensure that ethical and legal minimum standards are met. In this blog post, we provide an assessment of the potential impacts of this EU AI regulation on the area of Natural Language Processing and the underlying Foundation Models. The article is aimed at B2B companies in digital commerce and marketing. It should be noted that we do not and cannot provide legal advice.


What are Foundation Models?

Foundation Models are AI models that serve as a basis for a wide variety of applications and tasks. They are typically developed through extensive training on large amounts of data; in the field of Natural Language Processing, this means processing and generating human language. These AI models are capable of understanding complex relationships and automatically generating texts. Foundation Models can be used for a wide range of use cases, supporting machine translation, text generation, FAQ development, chatbots, and much more. Their context-sensitive understanding allows them to analyse, process, and interpret large volumes of unstructured data.
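To make this concrete, the following minimal Python sketch shows how such a pre-trained model can be called for a text-generation use case. It assumes the open-source Hugging Face transformers library; the small GPT-2 model and the marketing prompt are purely illustrative stand-ins for the larger Foundation Models discussed here.

```python
# Minimal sketch: text generation with a pre-trained foundation model.
# Assumes the Hugging Face "transformers" library is installed; the
# model choice (gpt2) and the prompt are illustrative, not a recommendation.
from transformers import pipeline

# Load a small, openly available language model as the foundation.
generator = pipeline("text-generation", model="gpt2")

# Draft marketing copy from a short prompt.
prompt = "Our new wireless headphones offer"
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```

The same pre-trained model could serve translation, summarisation, or chatbot scenarios simply by changing the pipeline task, which is precisely what makes these models "foundations".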

Recently, Foundation Models have received much public attention in the form of large language models such as ChatGPT, the various GPT versions, or BERT. Their impressive performance and versatility are indeed remarkable, offering enormous potential and application possibilities across industries, combined with significant efficiency gains.

However, Foundation Models also pose some challenges. They require large amounts of data for training to work effectively, and this reliance on data makes them susceptible to biases and prejudices inherited from the training material, as well as to fabricated outputs known as "hallucinations". It is therefore important to pay attention to ethical aspects and the quality of the underlying data when using such AI models.

What does the EU AI Act aim to regulate?

The EU AI Act is a draft law of the European Union that aims to make AI safer and more transparent. AI providers will be required to develop mechanisms that protect against questionable, misleading, or harmful content. In addition, AI models should be optimised with regard to data protection, predictability, and interpretability.
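What such a protection mechanism could look like is sketched below in deliberately simplified form; the blocklist approach and the example terms are purely hypothetical, and real providers would rely on trained classifiers rather than word lists.

```python
# A purely illustrative output guardrail placed in front of a
# generative model; the blocklist and its terms are hypothetical examples.
BLOCKLIST = {"scam", "counterfeit"}  # hypothetical blocked terms

def is_safe(text: str) -> bool:
    # Reject generated output containing blocked terms before delivery.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

print(is_safe("A friendly product description."))  # True
print(is_safe("Buy this counterfeit watch!"))      # False
```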

Foundation Model providers will have to disclose information on how their AI models work and fulfil certain obligations in the future, regardless of the specific area of application. This contradicts the original idea of relating the assessment to the application rather than the underlying technology.

The EU AI Act categorises AI systems into different risk classes to ensure appropriate regulation. Low-risk applications face little regulation; high-risk applications, such as systems that make treatment suggestions in healthcare, are subject to stricter requirements; and applications deemed unacceptable, such as social credit systems or mass surveillance of citizens, are prohibited outright.

In total, the EU AI Act defines four high-risk application areas with different levels of regulation:

  1. Biometric identification systems: Protection of privacy and prevention of misuse or discrimination in biometric data such as facial recognition or fingerprint scans.

  2. Critical infrastructures: Control of AI systems in areas such as power supply, transport, or healthcare due to potential malfunctions or attacks.

  3. Education and assessment: Risk assessment of AI systems in education that can be unfair or discriminatory.

  4. Healthcare: Control of AI models in healthcare, such as diagnostic support systems, telemedicine, and processing of medical data regarding data protection and accuracy.

The challenge is to classify the latest generation of AI systems, such as Foundation Models, according to these risk classes. Because they are suitable for so many different tasks, this classification is far from straightforward. The debate on the regulation of these systems is correspondingly technical and intense, both among experts and in the public.

What is the EU AI Act about in relation to Foundation Models?

The regulation of Foundation Models within the EU AI Act requires careful consideration: the aim is a legal framework that does not at the same time limit the possibilities and potential of powerful AI models. The goal is to maintain a healthy ecosystem of AI providers in Europe. Such providers should be able to grow in the future without being disadvantaged by legal restrictions compared to their non-European competition.

However, there are several reasons why the EU has decided to develop this AI regulation:

  1. Transparency

    The EU AI Act emphasises the need for transparency in the use of AI systems. Foundation Models such as ChatGPT or the various GPT versions are black-box approaches where it is difficult or even impossible for users to understand how the AI model arrives at its results. The EU AI Act aims to make these Foundation Models more transparent and understandable.
  2. Data security and data protection

    Foundation Models require large amounts of training data to be effectively usable. The EU AI Act is expected to include provisions on the protection of personal data and data security. The training and further processing of data is to be regulated, and the highest possible level of data protection ensured.
  3. Discrimination and biases

    Foundation Models can carry biases and prejudices stemming from their underlying training data. The EU AI Act aims to ensure that Foundation Models are as free from bias as possible and do not deliver discriminatory results concerning gender, religion, race, or ethnic origin (a simple bias probe is sketched after this list).
  4. Compliance and certification

    The EU AI Act intends to establish appropriate testing procedures and standards for AI models, hence the development of corresponding certification systems is planned. The requirements for Foundation Models are complex and require special procedures to ensure conformity.
  5. Liability and accountability

    The EU AI Act would establish legal obligations for AI providers and introduce corresponding liability for any damage caused. The definition of liability in Foundation Models is exceptionally difficult, as the output or language result is based on extensive training data and depends on various factors.
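To illustrate the bias point from item 3 above, the following Python sketch probes a masked language model with paired prompts and compares its completions. The model (bert-base-uncased), the templates, and the interpretation are illustrative assumptions; a real conformity assessment under the EU AI Act would be far more extensive.

```python
# Illustrative bias probe: compare how a masked language model
# completes occupation sentences for different subjects. Assumes the
# Hugging Face "transformers" library; model and templates are
# illustrative assumptions, not a certified compliance test.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for subject in ["The man", "The woman"]:
    # Ask the model to complete an occupation-related sentence.
    predictions = fill(f"{subject} worked as a [MASK].", top_k=3)
    top_tokens = [p["token_str"] for p in predictions]
    print(subject, "->", top_tokens)
```

Systematic differences between the completions for the two subjects would indicate the kind of learned bias that providers are expected to identify and counteract.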

Providers of Foundation Models will likely be required to disclose certain characteristics of their AI models and the nature of their data usage. They must identify biases and take appropriate countermeasures to prevent biased outputs or language results. At the same time, information about the data used in the AI models should be made transparent.

In addition, Foundation Model providers are encouraged to take appropriate measures to make their models interpretable and predictable, as well as to minimise the risks associated with their use. This poses a significant challenge, as a wide range of hypothetical risk scenarios needs to be considered, raising the question of who in the EU will develop and operate such models in the future.

Despite the positive aspects of incorporating Foundation Models into the EU AI Act, risk assessment should continue at the application level to avoid hindering innovation. Smaller model providers and open-source providers in particular could otherwise face significant difficulties, as the exemptions for open-source applications in the draft do not apply to Foundation Models.

Therefore, it is now essential to consider the EU AI Act from the outset when planning applications to be realised in 1-2 years' time ("AI Act compliant by design"). Foundation Model issues are relevant, as is the entire regulatory framework that must be fulfilled, including certification and other requirements. These aspects should be progressively taken into consideration and observed when planning and implementing AI applications.

What are the implications for digital commerce and marketing in B2B settings?

The EU AI Act aims to regulate the use of AI technologies to ensure their conformity with ethical and legal standards. This means that "high-risk" AI applications will be subject to stricter regulation. B2B companies in the field of digital commerce and marketing must therefore expect that the deployed AI-based tools and algorithms will need certification or approval in the future.

The EU AI Act may also have an impact on personalised recommendation systems and automated decision-making processes. It is therefore advisable for B2B companies in this area to take appropriate measures to ensure greater transparency in the applied AI algorithms. At the same time, labelling requirements for automated speech and text outputs are expected.
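What such a labelling requirement could mean in practice is sketched below. The disclosure wording and the generate_text() helper are purely hypothetical placeholders, since the Act's final labelling rules were still under negotiation at the time of writing.

```python
# Hypothetical sketch of a labelling step for AI-generated text;
# the disclosure wording and generate_text() are placeholders.

def generate_text(prompt: str) -> str:
    # Stand-in for a call to a foundation model (e.g. via an API).
    return f"Draft marketing copy based on: {prompt}"

def label_ai_output(text: str) -> str:
    # Append a human-readable disclosure to every generated text.
    return f"{text}\n[Notice: this text was generated with AI assistance.]"

print(label_ai_output(generate_text("spring campaign for a B2B webshop")))
```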

Further challenges within the framework of the EU AI Act will likely include adjusting one's own data protection measures, ensuring transparency and traceability of AI algorithms, and clarifying liability and responsibility issues. Companies should consider adapting their internal policies and review processes as well as monitoring AI applications accordingly.

For example, Apple and Samsung have banned the internal use of ChatGPT for data protection reasons following corporate data leaks. Overall, the question remains to what extent so-called transformer models will be usable in the Member States once the EU AI Act is adopted. Such models may also face the challenge that, when consumers use them end to end, direct access to the Foundation Model is required, so the processed data cannot be fully encrypted.

At the same time, the EU AI Act presents an opportunity for B2B companies to strengthen customer trust by protecting data, privacy, and ethical standards. By abiding by the expected new regulations, companies can ensure that their AI technologies and strategies meet legal requirements.

In general, when selecting SaaS services, B2B companies should carefully examine where the servers and data centres are located and how this relates to their own company headquarters, international market presence, and the applicable data protection regulations. The choice between a SaaS service with servers in the EU and one in a non-European location can have significant consequences for data integrity, privacy, and compliance with regulatory requirements.

When does the EU AI Act come into force?

The European Commission originally presented the EU AI Act in April 2021 to harmonise rules for Artificial Intelligence across all EU Member States; the regulatory initiative thus predates OpenAI's ChatGPT by several years. In December 2022, the Council of the European Union approved a revised version of the EU AI Act that largely corresponds to the Commission's original proposal. The relevant parliamentary committees have since agreed on a position, so the legislative draft now enters the trilogue phase, in which it will be negotiated by the European Commission, the Council, and the Parliament. As a regulation, the EU AI Act will apply directly in all 27 EU Member States once it is finalised.

In January 2023, work began on defining universal AI standards largely based on the International Organisation for Standardisation (ISO) guidelines. It is expected that the EU AI Act will come into force between late 2023 and early 2024. Transition periods will be granted after its entry into force. The aim is to ensure responsible and sustainable use of artificial intelligence across the Union. It will provide companies and users with clear rules and guidance, enabling them to harness the full potential of AI without infringing on fundamental rights and data protection.

Conclusion: The EU AI Act is coming

The EU AI Act represents an important milestone for the regulation of AI applications in the EU. The European ecosystem of AI providers has high hopes that legal guidelines which avoid over-regulation can support innovation and efficiency gains while addressing specific problems and risks.

Companies need to be aware of the consequences and ensure that their AI technologies and strategies comply with the new regulations. Therefore, it is crucial to follow the development of the EU AI Act closely and, if necessary, liaise with experts in AI regulation to ensure that companies continue to create innovative and ethically responsible AI solutions in digital commerce and marketing. Overall, it is advisable to prepare for the EU AI Act now, just as with the GDPR introduction.

If you have questions or would like more information about Retresco's offerings and possibilities, we are happy to help. For questions about the EU AI Act, we are glad to put you in touch with competent legal advisers. Feel free to reach out – our experts will gladly get back to you!
