Sorry Netflix, a true crime documentary is not the place for AI imagery

Screenshot from Netflix documentary What Jennifer Did.

AI imagery is cropping up everywhere. In some cases it works, in some cases it's just ugly (see the world's first AI-generated romance movie), and in some cases it's wildly inappropriate. Recent accusations against Netflix fall into that last camp.

With authenticity and fake news among the biggest concerns around AI image generation, a true crime documentary really isn't the place to showcase the new technology. Yet viewers of Netflix's What Jennifer Did suspect they have spotted some fairly blatant examples of exactly that.

A screenshot from Netflix documentary What Jennifer Did

What Jennifer Did is a Netflix documentary about Jennifer Pan, who was convicted of hiring someone to kill her parents (her mother was killed, while her father was injured but survived). Around 30 minutes into the programme, a description of Jennifer from a high school friend is accompanied by a series of images that we would assume to be photos from the time.

But looking more closely at the images, viewers have spotted what appear to be tell-tale signs of image manipulation that look suspiciously like the results of AI image generation. One photo shows Pan making a V-for-victory gesture, but she appears to have missing fingers and her thumb looks strangely long. In another suspect image, Pan seems to have an elongated tooth, a deformed ear and a gap in her right cheek.

A screenshot from Netflix documentary What Jennifer Did

So did Netflix really just fabricate the images using AI? Some publications have suggested that the images may be AI-generated variations spawned from one original photo of Pan. If that were the case, it would be hugely inappropriate, but I'm not convinced that's what happened.

At the time of writing, Netflix has not responded to the controversy, but I suspect the documentary makers may have attempted to enhance old low-resolution images using AI-powered upscaling or photo restoration software to make them look clearer on a TV screen. The problem is that even the best AI software can only take a poor-quality image so far, and such programs tend to over-sharpen certain lines, resulting in strange artifacts.
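To be clear, Netflix hasn't confirmed using any such tool, so what follows is purely illustrative: a minimal sketch of what this kind of AI upscaling looks like in practice, assuming OpenCV's dnn_superres module and a pre-trained EDSR model (the EDSR_x4.pb file and the input filename are examples, not anything taken from the documentary). Unlike classic bicubic enlargement, which just produces a softer, blurrier version of the same picture, the neural network invents plausible-looking detail that was never in the original, and that invented detail is exactly where warped teeth, ears and fingers can creep in.

# Illustrative sketch only: AI super-resolution vs classic interpolation.
# Assumes opencv-contrib-python is installed and the pre-trained EDSR_x4.pb
# model has been downloaded separately; none of this is confirmed as what
# Netflix's documentary makers actually used.
import cv2

image = cv2.imread("old_photo.jpg")  # hypothetical low-res source photo

# Classic bicubic enlargement: adds no new detail, just a blurrier image
h, w = image.shape[:2]
bicubic = cv2.resize(image, (w * 4, h * 4), interpolation=cv2.INTER_CUBIC)

# AI super-resolution: the network hallucinates plausible detail to fill
# in the gaps, which is where strange artifacts can come from
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # pre-trained model file, downloaded separately
sr.setModel("edsr", 4)       # algorithm name and 4x scale factor
ai_upscaled = sr.upsample(image)

cv2.imwrite("bicubic_x4.jpg", bicubic)
cv2.imwrite("ai_x4.jpg", ai_upscaled)

Comparing the two outputs side by side usually makes the difference obvious: the bicubic version is soft but faithful, while the AI version looks sharper at a glance yet can subtly redraw faces, hands and text.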

Even if that's the case, I still think Netflix should have steered clear of it given the context and the controversy around AI imagery. At the very least, it should have included a disclaimer saying that the images had been upscaled, sharpened or denoised with AI, to avoid this kind of backlash. Any kind of manipulation of photos in a documentary is controversial because the whole point is to present things as they were. Sure, artists' representations have long been used in crime reporting, for example to show court proceedings when cameras are prohibited, but it's clear that they're representations, and they should be labelled as such.

The Kate Middleton Photoshop debacle demonstrated just how closely images in the media are liable to be picked apart, and we've also seen that audiences are on the alert, ready to call out AI imagery wherever it appears: from dodgy scientific papers to Uber Eats and a Lego quiz. Already facing conspiracy theories galore, Netflix really needs to show it has standards.