I tried Adobe's Firefly 3 image generation tool — it takes photorealism to a new level

AI generated images of people using Adobe Firefly 3.

Adobe has released Firefly 3, the latest version of its artificial intelligence image generation model, alongside upgrades to generative fill in Photoshop.

The upgrades to Firefly bring significant improvements in photorealism, prompt adherence and overall control over the final image.

Firefly 3 was trained on billions of licensed stock images, with more detailed labelling of lighting, structure and style to improve the overall output. It is initially available through the Firefly website and will roll out to other Adobe products in the future, including Premiere Pro, where it will sit alongside AI video models like OpenAI's Sora.

In addition to generative fill, Adobe is launching generative expand for the Firefly web app. This feature allows users to expand the canvas or change the orientation of any image, and Firefly fills in the gaps.

What is Adobe Firefly 3?

AI image of an elderly couple on the beach at sunset

Adobe has been rapidly improving its Firefly family of transformer models, adding new features such as style reference and integrating them with existing Adobe products.

Firefly 3 was trained on licensed Adobe Stock images — which includes some creations from Midjourney — and will likely be the first version to have video capabilities later this year.

Most people will interact with Firefly through generative fill in Photoshop, or through template and other content creation in Illustrator or InDesign, but Adobe is also investing in its standalone Firefly web app.

The text-to-image model can also use an existing image and copy its style or structure. For example, if you have a photograph with perfect lighting, you could copy that style, or if you have a product image, you could copy the structure but change the content.

“In just over a year, Firefly has become the image generation tool of choice used by millions of creators to ideate every day, and we’re just getting started,” said Ely Greenfield, chief technology officer, Digital Media at Adobe.

“As we continue to advance the state of the art with Image 3 Foundation Model, we cannot wait to see how our creative community will push the bounds of what’s possible with this beta build.”

How well does Adobe Firefly 3 work?

AI image of a castle

I tried it on a handful of prompts and compared the results to Firefly 2, which was already impressive at art and design-focused generation. Firefly 3 is a major improvement for photorealistic images, and I think it will hit the stock image sector hard.

Adobe says the work on Firefly 3 focused on speeding up ideation, allowing designers to go from an idea to a fully fledged image with as little time and friction as possible.

I think they’ve achieved it. Unlike Midjourney, where you have to learn multiple parameters and how to implement them, Firefly has a series of well-defined and clear menu options.

Firefly 3 seems to offer better photorealism and a wider variety of outputs from a single prompt across styles like photo, art and illustration, as well as options to set the mood or lighting.

The company says the model also has a better understanding of prompts, which I put to the test by asking it to show me an old castle by a lake with people boating, and it was spot on. Because it follows instructions this closely, it pays to avoid vague prompts and to get the details right.
