The AI Act for E-commerce and Retailers

Martin Rosvall
25 Apr 2024
2-3 min read

Martin Rosvall, Professor of Physics with a focus on Computational Science, Umeå University. Lead Scientist and Co-founder of Sift Lab AB.

After three years of negotiation, the European Parliament approved the AI Act in mid-March. It is the world's first major initiative for regulating AI, aiming to turn Europe into a global hub for trustworthy AI. Consider it the AI equivalent of GDPR, protecting human rights, safety, and ethical standards.

The AI Act takes a risk-based approach to AI systems: stricter regulations and faster implementation timelines apply to high-risk AI systems, while more lenient rules govern those posing less risk. The AI Act defines four levels of risk:

Unacceptable risk:
AI that threatens people's rights or manipulates their behavior will be banned. Examples include social scoring, biometric categorization based on sensitive characteristics, and facial recognition databases built through untargeted scraping of facial images.

High risk:
AI systems that could impact critical infrastructure, employment, legal systems, and other areas with significant effects on citizens will be subject to strict obligations before they can be placed on the market.

Limited risk:
Tools such as emotion detectors, basic chatbots, image editors, deepfake image or video software, and AI for predictions and personalized suggestions must be transparent and ensure that people know they are dealing with AI, to foster trust.

Minimal risk:
AI-powered video games, spam filters, and similar applications will have no restrictions.
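
To keep the four tiers straight, it can help to see them as a simple mapping from risk level to obligation. Here is a minimal Python sketch; the per-tier examples are illustrative shorthand from this post, not the Act's exhaustive annex lists:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no restrictions"

# Illustrative examples per tier; the Act's annexes define the real scope.
EXAMPLES = {
    RiskTier.UNACCEPTABLE: ["social scoring", "untargeted facial-image scraping"],
    RiskTier.HIGH: ["critical infrastructure", "employment", "legal systems"],
    RiskTier.LIMITED: ["chatbots", "deepfake tools", "recommendation engines"],
    RiskTier.MINIMAL: ["spam filters", "AI in video games"],
}

for tier in RiskTier:
    print(f"{tier.name:>12}: {tier.value} (e.g., {', '.join(EXAMPLES[tier])})")
```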

The AI Act will become fully applicable two years after entering into force, with some exceptions: the prohibitions take effect after six months, and the obligations for general-purpose AI models after 12 months. Any firm that violates the rules risks a fine of up to seven percent of its global annual turnover.

For e-commerce and retailers, the minimal-risk and limited-risk levels are the relevant ones. Transparency means that when customers use chatbots or receive personal recommendations, they must be made aware that they are interacting with a machine so they can make an informed decision to continue. AI-generated text, audio, and video must also be identifiable as such.
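
In practice, the transparency duty can be as lightweight as a disclosure attached to the first machine-generated touchpoint. Here is a minimal Python sketch, where `reply` and `generate_answer` are hypothetical stand-ins for a storefront chatbot backend:

```python
AI_DISCLOSURE = "You are chatting with an AI assistant."

def generate_answer(user_message: str) -> str:
    # Stand-in for the real model call; hypothetical for this sketch.
    return f"Here is a suggestion based on: {user_message!r}"

def reply(user_message: str, first_turn: bool = False) -> str:
    """Prepend the disclosure on the first turn so the customer knows
    they are interacting with a machine before deciding to continue."""
    answer = generate_answer(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}" if first_turn else answer

print(reply("Do you have these sneakers in size 42?", first_turn=True))
```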

Recent advances in large language models led to late revisions of the AI Act, which place general-purpose AI models in their own two-tier category. Besides full transparency about architecture and data sources, models with high-impact capabilities, including GPT-4 and other large models trained with at least 10^25 FLOPs, will be subject to stringent safety testing. Smaller general-purpose models face transparency requirements similar to those at the limited-risk level.
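
To put the 10^25 FLOPs threshold in perspective, a common rule of thumb estimates the training cost of a dense transformer as roughly 6 × parameters × training tokens. A back-of-the-envelope check in Python, using publicly reported GPT-3 figures:

```python
THRESHOLD_FLOPS = 1e25  # the AI Act's high-impact compute threshold

def training_flops(parameters: float, tokens: float) -> float:
    """Rough training cost via the common 6 * N * D approximation
    for dense transformer models."""
    return 6 * parameters * tokens

# GPT-3 as a reference point (publicly reported figures):
# 175 billion parameters trained on roughly 300 billion tokens.
gpt3 = training_flops(175e9, 300e9)
print(f"GPT-3: ~{gpt3:.1e} FLOPs; above threshold: {gpt3 > THRESHOLD_FLOPS}")
# Roughly 3e23 FLOPs, well below 1e25: a GPT-3-scale run would not land
# in the high-impact tier; only models trained with far more compute would.
```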

At Sift Lab, we welcome the AI Act and its focus on transparency, a principle that has always guided our work. We are committed to helping you navigate your data-driven strategy with clarity and trust. Please do not hesitate to reach out if you have any questions or need assistance.


Ready to become best friends with your data?

Enter your email here to book an intro and we'll reach out to you as soon as we can.