How Will the EU AI Act Affect Brands and Retailers?

Brands and retailers could soon have to comply with the EU’s regulations around 2024’s buzziest technology: artificial intelligence.

As the AI—and particularly generative AI—frenzy continues for enterprises and consumers globally, some states and countries have begun working to outline guidelines, laws and regulations on AI.


Globally, the EU AI Act is considered among the most comprehensive legislation that exists for that corner of technology. It outlines restrictions and sanctions for both the developers and the users of AI.

Tom Whittaker, an attorney at Burges Salmon, said that the looming law will categorize AI systems into a number of buckets based on the risk the system brings to EU citizens.

“What the AI Act does is to say that there are certain types of AI systems which are prohibited because they introduce an unacceptable risk for fundamental rights for EU member citizens,” he said.

Several of the unacceptable risks in the EU AI Act include “Social scoring: classifying people based on behavior, socio-economic status or personal characteristics,” “biometric identification and categorization of people” and “real-time and remote biometric identification systems, such as facial recognition.”

For most fashion and apparel companies, the pending legislation, which Whittaker and other experts say is likely to pass the EU Parliament this year, would mean taking a closer look at the third-party partners they use for composable commerce and off-the-shelf AI solutions.

That’s because any company that uses customer-facing AI in the EU, whether in brick-and-mortar stores or online, will be considered a “deployer,” a designation that, under the most recent version of the law, carries its own compliance obligations.

Whittaker said that even if a company is based outside the EU, the incoming regulations will apply if its AI systems affect EU citizens.

“[The EU AI Act] is designed to operate internationally in the sense that the EU’s concern is about risk to EU citizens in the EU—so any AI system which is deployed or put into service on the EU market, or where the outputs could affect those in the EU market,” Whittaker explained.

Whittaker said that while the exact use cases prohibited by these restrictions have thus far remained unclear, fashion and apparel companies may need to consider the “social scoring” piece when enacting some use cases around customer personalization.

For instance, some companies have begun using AI to put restrictions on—or add costs to—returns for some customers based on their behavior with a retailer or brand, while other customers who are considered loyal, legitimate shoppers of the same retailer or brand receive preferential treatment based on their history.

Whittaker said while it isn’t yet clear whether that kind of use case would violate the AI Act, brands and retailers will need to consult their legal teams when considering an application for the technology that could fall under the prohibited category of the AI Act.

“Until you really take a specific use case, and you work through that wording and you also take the legal approach of just going, ‘What could that mean, and how could you argue that in different ways?’ there is a risk that those sorts of systems could fall afoul of this [legislation],” Whittaker said.

While some use cases for AI would be completely banned by the AI Act, other use cases will soon be considered “high risk.” Those high-risk use cases are what retailers and brands should be thinking about as they work to ensure their AI policies and use cases align with the requirements set forth by the EU, per Maarten Stassen, an attorney at Crowell & Moring LLP.

“What is banned is, I don’t think, the biggest challenge [for this space]. There’s the high-risk AI where there’s more obligations about conformity assessments, fundamental rights, impact assessments, and that might be, indeed, where they should be looking at, or what will change the way they work with their providers,” Stassen told Sourcing Journal.

Per the EU, any AI system considered high risk under the EU AI Act will be assessed before being placed on the market and re-assessed throughout its lifecycle. High-risk AI systems include technology that enables “employment, worker management and access to self-employment,” which could impact retailers and brands.

Another area that could impact brands and retailers comes in the form of transparency and disclosure around their use of generative AI.

Some brands have begun using generative AI for product photos, social media campaigns and more. The EU AI Act could require “disclosing that the content was generated by AI.” Per the EU, “Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.”

Deepfakes are images, audio or video made to sound or look like a real-life person. Brands using models’ digital likenesses in campaigns could soon be required to disclose that their images were generated by, or in tandem with, AI, a practice for which some brands have already faced consumer backlash.

Whittaker said what transparency will require in practice could be murky.

“What good transparency and necessary transparency looks like could be subject to further guidance, and ultimately it will come down to a context-based, risk-based approach,” he said.

The AI Act has already been adopted by the EU’s 27 member states, though it faced some resistance from several countries. Two committees in the EU Parliament, the civil liberties and internal market committees, put their support behind the pending law Tuesday. The law will go before the EU Parliament in April for a plenary vote; if it passes, it would then need to be formally adopted by the Council to be enacted into EU law.

Stassen and Whittaker said they expect there could be a “halo effect” from the EU AI Act, just as the General Data Protection Regulation (GDPR) set the gold standard for many global privacy regulations. Thus far, the UK and U.S. have been lagging in terms of official legislation on the technology.

“I think for most of the world, it’s likely to set the bar pretty high because the EU is known for having comprehensive—some would say high-burdensome—regulations. I say most of the world because China has a statute around generative AI, which requires people to use it in a way which is consistent with Communist values, and I assume that can be quite tricky to do,” Whittaker said. “For most of the world, that would mean that the EU legislation will be likely to be the higher bar.”