Adobe has introduced a new family of AI models named Firefly as part of its entry into the generative AI market.
Firefly builds on the generative AI technologies Adobe announced for Photoshop, Express, and Lightroom at its annual MAX conference the previous year, which let users create and edit objects, composites, and effects simply by describing them. Adobe has rushed to keep pace with the frenzy surrounding the technology, for instance by allowing contributors to sell AI-generated artwork in its stock content marketplace.
The first Firefly model focuses on generating images and text effects. A sample video from Adobe demonstrates a "Generate Variations" option: by highlighting an element in a multilayer work of art (a lighthouse, in the demo), Firefly uses AI to generate different versions of that element.
Impressive as they are, AI-generated artwork and videos have drawn intense criticism from artists. Critics argue that these systems essentially scrape artists' existing online work and repackage it into "new" images without crediting the original creators.
For now, Firefly offers a single model that generates visuals and text effects from descriptions; it is still in beta and has no set price (Adobe says pricing will come). The model, which was trained on billions of images, will soon be able to generate content from a text prompt inside Adobe programs including Express, Photoshop, Illustrator, and Adobe Experience Manager. (At the moment, using it requires visiting a website.)
Adobe said Firefly will emphasize giving creators "opportunities to benefit from your skills and creativity and protect your work." The company already offers non-AI platforms that do that.