This tool could save your photos from AI (for now)

 PhotoGuard image protector against AI image generators

The rapid explosion of AI image generators and editors has raised wide-ranging concerns, from copyright to the impact on creative jobs. But even outside of the creative sector, the general public may start to fear what could happen now that anyone can find images of them online and potentially doctor them using AI.

Even watermarking images can do little to protect them from manipulation now that AI watermark removers exist. But while AI image generators are proliferating, so too are potential defences. The research institute MIT CSAIL is the latest to offer one: a tool called PhotoGuard (see our pick of the best AI art generators to learn more about the expanding tech).

PhotoGuard seems to work in a similar way to Glaze, which we've mentioned before. An initial encoder process subtly alters an image by changing select pixels in a way that interferes with AI models' ability to understand what the image shows. The changes are invisible to the human eye but are picked up by AI models, affecting the model's latent representation of the target image (the mathematics describing the position and colour of each pixel). Effectively, these tiny alterations "immunise" an image by preventing an AI from understanding what it is looking at.
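To make the idea concrete, here is a minimal toy sketch of this encoder attack. It is not PhotoGuard's actual code (which works on the encoder of a latent diffusion model); the random linear map standing in for the encoder, the pixel budget `eps`, and the step sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image encoder: a fixed random linear map.
# (Real systems use the learned encoder of a diffusion model.)
W = rng.normal(size=(16, 64))          # 64 "pixels" -> 16-dim latent
encode = lambda x: W @ x

image = rng.uniform(0, 1, size=64)     # a flattened 8x8 grayscale "photo"
eps = 0.03                             # per-pixel budget: invisible to the eye

# Projected gradient ascent: push the latent representation as far as
# possible from the original while keeping every pixel change within +/- eps.
delta = rng.uniform(-eps, eps, size=64) * 0.1
for _ in range(50):
    diff = encode(image + delta) - encode(image)
    grad = 2 * W.T @ diff              # gradient of ||latent shift||^2 w.r.t. delta
    delta = np.clip(delta + 0.01 * np.sign(grad), -eps, eps)

print(np.abs(delta).max() <= eps)      # True: the pixel changes stay tiny
print(np.linalg.norm(encode(image + delta) - encode(image)))  # but the latent moves a lot
```

The point of the sketch is the asymmetry: the perturbation is capped per pixel (so the photo looks unchanged to a person) while the optimisation deliberately maximises the shift in the encoder's internal representation.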

PhotoGuard process to protect images from AI manipulation

After that, a more advanced diffusion method camouflages an image as something else in the eyes of the AI by optimising the "perturbations" it applies so that the image resembles a particular target. This means that when the AI tries to edit the image, the edits are applied to the "fake" target image instead, resulting in output that looks unrealistic.
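The targeted version can be sketched in the same toy setting. Again, this is an illustrative assumption, not PhotoGuard's implementation: the real diffusion attack differentiates through the generative model itself, which is far more expensive, while here a random linear map plays the encoder and the "target" is just another random image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same toy linear "encoder" as in the previous sketch.
W = rng.normal(size=(16, 64))
encode = lambda x: W @ x

image  = rng.uniform(0, 1, size=64)    # the photo to protect
target = rng.uniform(0, 1, size=64)    # e.g. a plain grey block
eps = 0.05                             # per-pixel budget

# Projected gradient descent: nudge the photo's latent representation
# toward the target's latent, within the per-pixel budget.
delta = np.zeros(64)
for _ in range(500):
    diff = encode(image + delta) - encode(target)
    grad = 2 * W.T @ diff
    delta = np.clip(delta - 0.002 * grad, -eps, eps)

before = np.linalg.norm(encode(image) - encode(target))
after  = np.linalg.norm(encode(image + delta) - encode(target))
print(after < before)   # the model now "sees" something closer to the target
```

When an editor then works from the protected photo, its edits are conditioned on the camouflage target rather than the real content, which is why the resulting output tends to look broken or unrealistic.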

As we've noted before, however, this isn't a permanent solution. The process could be reverse-engineered, allowing the development of AI models immune to the tool's interference.

MIT doctorate student Hadi Salman, the lead author of the PhotoGuard research paper, said: "While I am glad to contribute towards this solution, much work is needed to make this protection practical. Companies that develop these models need to invest in engineering robust immunizations against the possible threats posed by these AI tools."

He called for a collaborative approach involving model developers, social media platforms and policymakers to defend against unauthorized image manipulation. "Working on this pressing issue is of paramount importance today," he said. PhotoGuard's code is available on GitHub. See our pick of the best AI art tutorials to learn more about how AI tools can be used (constructively).