Sensitive data ruling by Europe's top court could force broad privacy reboot

A ruling issued yesterday by the European Union's top court could have major implications for online platforms that use background tracking and profiling to target users with behavioral ads, or to feed recommender engines designed to surface so-called 'personalized' content.

The impacts could be even broader, with privacy law experts suggesting the judgement could dial up legal risk for a variety of other forms of online processing, from dating apps to location tracking and more -- although they also expect fresh legal referrals as operators seek to unpick the complex practical difficulties arising from the judgement.

The referral to the Court of Justice of the EU (CJEU) relates to a Lithuanian case concerning national anti-corruption legislation. But the impact of the judgement is likely to be felt across the region as it crystallizes how the bloc's General Data Protection Regulation (GDPR), which sets the legal framework for processing personal data, should be interpreted when it comes to data ops in which sensitive inferences can be made about individuals.

Privacy watchers were quick to pay attention -- and are predicting substantial follow-on impacts for enforcement, as the CJEU's guidance essentially instructs the region's network of data protection agencies (DPAs) to avoid a too-narrow interpretation of what constitutes sensitive data, implying that the bloc's strictest privacy protections will become harder for platforms to circumvent.

In an email to TechCrunch, Dr Gabriela Zanfir-Fortuna, VP for global privacy at the Washington-based thinktank the Future of Privacy Forum, summed up the CJEU's "binding interpretation" as confirmation that data capable of revealing the sexual orientation of a natural person "by means of an intellectual operation involving comparison or deduction" are in fact sensitive data protected by Article 9 of the GDPR.

The relevant part of the referral to the CJEU asked whether the publication of the name of a spouse or partner amounted to the processing of sensitive data because it could reveal sexual orientation. The court decided that it does -- and, by implication, that the same rule applies to inferences connected to other types of special category data.

"I think this might have broad implications moving forward, in all contexts where Article 9 is applicable, including online advertising, dating apps, location data indicating places of worship or clinics visited, food choices for airplane rides and others," Zanfir-Fortuna predicted, adding: "It also raises huge complexities and practical difficulties to catalog data and build different compliance tracks, and I expect the question to come back to the CJEU in a more complex case."

As she noted in a tweet, a similarly non-narrow interpretation of special category data processing recently got the gay hook-up app Grindr into hot water with Norway's data protection agency, leading to a fine of €10M -- around 10% of its annual revenue -- last year.

The GDPR allows for fines that can scale as high as 4% of global annual turnover or €20M, whichever is greater. So any Big Tech platform that falls foul of this (now) firmed-up requirement to gain explicit consent for making sensitive inferences about users could face fines orders of magnitude larger than Grindr's.
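For a sense of scale, here's a minimal sketch of that fine ceiling in code -- the turnover figure is purely illustrative, not any real company's accounts:

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious GDPR infringements (Article 83(5)):
    4% of global annual turnover or EUR 20M, whichever is greater."""
    return max(0.04 * global_annual_turnover_eur, 20_000_000)

# Purely illustrative turnover figure:
print(gdpr_max_fine(100e9))  # 4000000000.0 -- i.e. EUR 4B vs Grindr's ~EUR 10M
```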

Ad tracking in the frame

Discussing the significance of the CJEU's ruling, Dr Lukasz Olejnik, an independent consultant and security and privacy researcher based in Europe, was unequivocal in predicting serious impacts -- especially for adtech.

"This is the single, most important, unambiguous interpretation of GDPR so far," he told us. "It’s a rock-solid statement that inferred data, are in fact [personal] data. And that inferred protected/sensitive data, are protected/sensitive data, in line of Article 9 of GDPR."

"This judgement will speed up the evolution of digital ad ecosystems, towards solutions where privacy is considered seriously," he also suggested. "In a sense, it backs up the approach of Apple, and seemingly where Google wants to transition the ad industry [to, i.e. with its Privacy Sandbox proposal]."

Since May 2018, the GDPR has set strict rules across the bloc for processing so-called 'special category' personal data -- such as health information, sexual orientation, political affiliation and trade union membership -- but there has been some debate (and variation in interpretation between DPAs) about how the pan-EU law actually applies to data processing operations where sensitive inferences may arise.

This is important because large platforms have, for many years, been able to hold enough behavioral data on individuals to -- essentially -- circumvent a narrower interpretation of special category data processing restrictions by identifying (and substituting) proxies for sensitive info.

Hence some platforms can (or do) claim they're not technically processing special category data -- while triangulating and connecting so much other personal information that the corrosive effect on individual rights is the same. (It's also important to remember that sensitive inferences about individuals do not have to be correct to fall under the GDPR's special category processing requirements; it's the data processing that counts, not the validity of the sensitive conclusions reached -- indeed, bad sensitive inferences can be terrible for individual rights too.)

This might entail an ad-funded platform using a cultural or other type of proxy for sensitive data to target interest-based advertising, or to recommend similar content it thinks the user will also engage with. Examples of such inferences include using the fact that a person has liked Fox News' page to infer they hold right-wing political views; linking membership of an online Bible study group to holding Christian beliefs; treating the purchase of a stroller and cot, or a trip to a certain type of shop, as signalling a pregnancy; or inferring that a user of the Grindr app is gay or queer.
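To illustrate how mechanically such proxy signals can be converted into sensitive labels, consider this toy sketch -- the signal names and mappings are invented for the example, not any platform's real logic:

```python
# Toy illustration of proxy-based sensitive inference. The signal names and
# mappings below are invented for this example -- not any platform's code.
PROXY_MAP = {
    "liked:fox_news_page": ("political_views", "right-leaning"),
    "member:bible_study_group": ("religion", "christian"),
    "purchased:stroller_and_cot": ("health", "pregnancy"),
    "installed:grindr": ("sexual_orientation", "gay_or_queer"),
}

def infer_sensitive_traits(user_signals: list[str]) -> dict[str, str]:
    """Map innocuous-looking behavioral signals to inferred special category
    data -- processing the CJEU says falls under Article 9, whether or not
    the resulting inference is actually correct."""
    traits = {}
    for signal in user_signals:
        if signal in PROXY_MAP:
            category, value = PROXY_MAP[signal]
            traits[category] = value
    return traits

print(infer_sensitive_traits(["liked:fox_news_page", "purchased:stroller_and_cot"]))
# {'political_views': 'right-leaning', 'health': 'pregnancy'}
```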

For recommender engines, the algorithms typically work by tracking viewing habits and clustering users based on these patterns of activity and interest, in a bid to maximize engagement with the platform. Hence the AIs of a big-data platform like YouTube can populate a sticky sidebar of other videos enticing you to keep clicking, or automatically select something 'personalized' to play once the video you actually chose to watch comes to an end. But, again, this type of behavioral tracking seems likely to intersect with protected interests and therefore, as the CJEU's ruling underscores, to entail the processing of sensitive data.
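A bare-bones sketch of that clustering step -- invented watch-time data, off-the-shelf k-means -- shows how sensitive groupings can emerge without anyone explicitly labeling a single user:

```python
# Minimal sketch of behavior-based clustering for recommendations.
# The watch-time data is invented; real systems use far richer signals.
import numpy as np
from sklearn.cluster import KMeans

# Rows = users; columns = hours watched per content theme, e.g.
# [news, sports, LGBTQ+ creators, religious sermons].
watch_history = np.array([
    [9.0, 1.2, 0.0, 0.1],
    [8.5, 0.9, 0.2, 0.0],
    [0.2, 0.3, 7.8, 0.1],
    [0.1, 0.6, 8.9, 0.2],
])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(watch_history)
print(clusters)  # two behavioral 'audiences', e.g. [1 1 0 0]

# Recommending "more of what your cluster watches" means the cluster label
# itself can function as an inferred sensitive trait.
```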

Facebook, for one, has long faced regional scrutiny for letting advertisers target users based on interests related to sensitive categories like political beliefs, sexuality and religion without asking for their explicit consent -- which is the GDPR's bar for (legally) processing sensitive data.

The tech giant now known as Meta has so far avoided direct sanction in the EU on this issue, despite being the target of a number of forced-consent complaints -- some of which date back to the GDPR coming into application more than four years ago. (A draft decision by Ireland's DPA last fall -- apparently accepting Facebook's claim that it can entirely bypass consent requirements to process personal data by stipulating that users are in a contract with it to receive ads -- was branded a joke by privacy campaigners at the time. The procedure remains open pending a review by other EU DPAs which, campaigners hope, will ultimately take a different view of the legality of Meta's consent-less, tracking-based business model. In the meantime, that particular regulatory enforcement grinds on.)

In recent years, as regulatory attention -- and legal challenges and privacy lawsuits -- have dialled up, Facebook/Meta has made some surface tweaks to its ad targeting tools, announcing towards the end of last year, for example, that it would no longer allow advertisers to target sensitive interests like health, sexual orientation and political beliefs.

However, it still processes vast amounts of personal data across its various social platforms to configure the "personalized" content users see in their feeds. And it still tracks and profiles web users to target them with "relevant" ads -- without offering people a choice to refuse that kind of intrusive behavioral tracking and profiling. So the company continues to operate a business model that relies upon extracting and exploiting people's information without asking if they're okay with that.

A tighter interpretation of existing EU privacy laws, therefore, poses a clear strategic threat to an adtech giant like Meta.

YouTube's parent, Google/Alphabet, also processes vast amounts of personal data -- both to configure content recommendations and for behavioral ad targeting -- so it could also be in the firing line if regulators pick up the CJEU's steer to take a tougher line on sensitive inferences, unless it's able to demonstrate that it asks users for explicit consent to such sensitive processing. (It's perhaps notable that Google recently amended the design of its cookie consent banner in Europe to make it easier for users to opt out of that type of ad tracking, following a couple of tracking-focused regulatory interventions in France.)

"Those organisations who assumed [that inferred protected/sensitive data, are protected/sensitive data] and prepared their systems, should be OK. They were correct, and it seems that they are protected. For others this [CJEU ruling] means significant shifts," Olejnik predicted. "This is about both technical and organisational measures. Because processing of such data is, well, prohibited. Unless some significant measures are deployed. Like explicit consent. This in technical practice may mean a requirement for an actual opt-in for tracking."

"There’s no conceivable way that the current status quo would fulfil the needs of GDPR Article 9(2) paragraph by doing nothing," he added. "Changes cannot happen just on paper. Not this time. DPAs just got a powerful ammunition. Will they want to use it? Keep in mind that while this judgement came this week, this is how the GDPR, and EU data protection law framework, actually worked from the start."

The EU does have incoming regulation that will further tighten the operational noose around the most powerful 'Big Tech' platforms, plus extra rules for so-called very large online platforms (VLOPs): the Digital Markets Act (DMA) and the Digital Services Act (DSA), respectively, are set to come into force from next year -- with the goal of levelling the competitive playing field around Big Tech and dialling up platform accountability for online consumers more generally.

The DSA even includes a provision requiring VLOPs that use algorithms to determine the content users see (aka "recommender systems") to provide at least one option that is not based on profiling -- so an explicit requirement for a subset of larger platforms to give users a way to refuse behavioral tracking is already looming on the horizon in the EU.

But privacy experts we spoke to suggested the CJEU ruling will essentially widen that requirement to non-VLOPs too -- or at least to those platforms processing enough data to run the associated legal risk of their algorithms making sensitive inferences, even if they're not consciously instructing them to (tl;dr: an AI black box must comply with the law, too).

Both the DSA and DMA will also introduce a ban on the use of sensitive data for ad targeting -- which, combined with the CJEU's confirmation that sensitive inferences are sensitive data, suggests there will be meaningful heft to the incoming, pan-EU restriction on behavioral advertising, which some privacy watchers had worried would be all too easily circumvented by adtech giants' usual data-mining, proxy-identifying tricks.

Reminder: Big Tech lobbyists concentrated substantial firepower to successfully see off an earlier bid by EU lawmakers, last year, for the DSA to include a total ban on tracking-based targeted ads. So anything that hardens the limits that remain is important.

Behavioral recommender engines

Dr Michael Veale, an associate professor in digital rights and regulation at UCL's faculty of laws, predicts especially "interesting consequences" flowing from the CJEU's judgement on sensitive inferences when it comes to recommender systems -- at least for those platforms that don't already ask users for explicit consent to behavioral processing that risks straying into sensitive areas in the name of serving up sticky 'custom' content.

One possible scenario is that platforms respond to the legal risk the CJEU has underscored around sensitive inferences by defaulting to chronological and/or other non-behaviorally configured feeds -- unless or until they obtain explicit consent from users to receive such 'personalized' recommendations.
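Concretely, that default could be as simple as a branch at feed-build time -- a sketch, again with hypothetical field names:

```python
# Hypothetical feed selector: behavioral ranking only behind explicit consent.
def build_feed(user: dict, posts: list[dict]) -> list[dict]:
    if user.get("explicit_profiling_consent", False):
        # Opted-in users get the engagement-optimized, profiled ranking.
        return sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
    # Everyone else gets a plain reverse-chronological feed by default.
    return sorted(posts, key=lambda p: p["published_at"], reverse=True)
```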

"This judgement isn't so far off what DPAs have been saying for a while but may give them and national courts confidence to enforce," Veal predicted. "I see interesting consequences of this judgment in the area of recommendations online. For example, recommender-powered platforms like Instagram and TikTok likely don't manually label users with their sexuality internally -- to do so would clearly require a tough legal basis under data protection law. They do, however, closely observe how users interact with the platform, and mathematically cluster together user profiles with certain types of content. Some of these clusters are clearly related to sexuality, and male users clustered around content that is aimed at gay men can be confidently assumed not to be straight. From this judgment, it can be argued that such cases would need a legal basis to process, which can only be refusable, explicit consent."

As well as VLOPs like Instagram and TikTok, he suggests a smaller platform like Twitter can't expect to escape such a requirement, given the CJEU's clarification of the non-narrow application of GDPR Article 9 -- since Twitter's use of algorithmic processing for features like so-called 'top tweets', or the other users it recommends people follow, may entail processing similarly sensitive data (and it's not clear whether the platform explicitly asks users for consent before it does that processing).

"The DSA already allows individuals to opt for a non-profiling based recommender system but only applies to the largest platforms. Given that platform recommenders of this type inherently risk clustering users and content together in ways that reveal special categories, it seems arguably that this judgment reinforces the need for all platforms that run this risk to offer recommender systems not based on observing behaviour," he told TechCrunch.

In light of the CJEU cementing the view that sensitive inferences do fall under GDPR Article 9, a recent attempt by TikTok to remove European users' ability to consent to its profiling -- by seeking to claim it has a legitimate interest to process the data -- looks like extremely wishful thinking, given how much sensitive data TikTok's AIs and recommender systems are likely to be ingesting as they track usage and profile users.

TikTok's plan was fairly quickly pounced upon by European regulators, in any case. And last month -- following a warning from Italy's DPA -- it said it was 'pausing' the switch, so the platform may have decided the legal writing is on the wall for a consentless approach to pushing algorithmic feeds.

Yet, given Facebook/Meta has not (yet) been forced to pause its own trampling of the EU's legal framework around personal data processing, such alacritous regulatory attention almost seems unfair -- or unequal, at least. But it's a sign of what's finally, inexorably, coming down the pipe for all rights violators, whether they're long at it or just now attempting to chance their hand.

Sandboxes for headwinds

On another front, Google's repeatedly delayed plan to deprecate support for behavioral tracking cookies in Chrome does appear more naturally aligned with the direction of regulatory travel in Europe.

Question marks remain over whether the alternative ad-targeting proposals it's cooking up (under close regulatory scrutiny in Europe) will pass a dual review process factoring in both competition and privacy oversight. But, as Veale suggests, non-behavior-based recommendations -- such as interest-based targeting via whitelisted topics -- may be less risky, at least from a privacy law point of view, than trying to cling to a business model that seeks to manipulate individuals on the sly, by spying on what they're doing online.

Here's Veale again: "Non-behaviour based recommendations based on specific explicit interests and factors, such as friendships or topics, are easier to handle, as individuals can either give permission for sensitive topics to be used, or could be considered to have made sensitive topics 'manifestly public' to the platform."
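The distinction is easy to see in code: a recommender keyed to interests a user has explicitly declared needs no behavioral surveillance at all. A toy sketch, with invented content and topic names:

```python
# Toy non-behavioral recommender: match content to topics the user has
# explicitly chosen, rather than to inferences drawn from tracked behavior.
def recommend(declared_topics: set[str], catalog: list[dict]) -> list[dict]:
    return [item for item in catalog if item["topic"] in declared_topics]

catalog = [
    {"title": "Cup final roundup", "topic": "sports"},
    {"title": "Sourdough basics", "topic": "cooking"},
    {"title": "Election explainer", "topic": "politics"},
]
print(recommend({"sports", "cooking"}, catalog))
# [{'title': 'Cup final roundup', ...}, {'title': 'Sourdough basics', ...}]
```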

So what about Meta? Its strategy -- in the face of what senior execs have for some time been forced to publicly admit are rising "regulatory headwinds" (euphemistic investor-speak which, in plainer English, signifies a total privacy compliance horrorshow) -- has been to elevate a high-profile former regional politician, the ex U.K. deputy PM and MEP Nick Clegg, to president of global affairs, in the hope that sticking a familiar face at its top table, one who makes metaverse 'jam tomorrow' jobs-creation promises, will persuade local lawmakers not to enforce their own laws against its business model.

But as the EU's top judges weigh in with more jurisprudence defending fundamental rights, Meta's business model looks very exposed -- sitting on legally challenged ground whose claimed justifications are surely on their last spin cycle before a long overdue rinsing kicks in, in the form of major GDPR enforcement. Meanwhile, its bet on Clegg's local fame (or infamy) scoring serious influence over EU policymaking always looked closer to cheap trolling than a solid, long-term strategy.

If Meta was hoping to buy itself yet more time to retool its adtech for privacy -- as Google claims to be doing with its Sandbox proposal -- it's left it exceptionally late to execute what would have to be a truly cleansing purge.