Industry News

Adobe Substance 3D’s AI features can turn text into backgrounds and textures

The new “Text to Texture” feature is now available in the Adobe Substance 3D Sampler 4.4 beta. | Image: Adobe

While many developers are working out how generative AI could be used to produce entire 3D objects from scratch, Adobe is already using its Firefly AI model to streamline existing 3D workflows. At the Game Developers Conference on Monday, Adobe debuted two new integrations for its Substance 3D design software suite that allow 3D artists to quickly produce creative assets for their projects using text descriptions.
The first is a “Text to Texture” feature for Substance 3D Sampler that Adobe says can generate “photorealistic or stylized textures” from text prompts, such as “scaled skin” or “woven fabric.” These textures can then be applied directly to 3D models, sparing designers from hunting down appropriate reference materials.

The second feature is a new “Generative Background” tool for Substance 3D Stager. This allows designers to use text prompts to generate background images for objects they’re composing into 3D scenes. The clever thing here is that both features actually use 2D imaging technology, just like Adobe’s previous Firefly-powered tools in Photoshop and Illustrator. Firefly isn’t generating 3D models or files — instead, Substance takes 2D images produced from text descriptions and applies them in ways that appear 3D: wrapped around a model’s surface as a texture map, or placed behind a scene as a backdrop.
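Adobe hasn’t detailed the internals, but both tricks are standard 3D graphics techniques: wrap a flat image around a mesh as a texture map, or set it behind a scene as a backdrop. As a rough sketch of the idea (not Adobe’s code), here is how those two steps look in three.js, with placeholder file names standing in for Firefly’s generated images:

    import * as THREE from "three";

    // Placeholder file names; in Substance these 2D images would come from
    // Firefly text prompts (this is an illustrative sketch, not Adobe's code).
    const TEXTURE_URL = "generated-texture.png";   // e.g. a "woven fabric" prompt
    const BACKDROP_URL = "generated-backdrop.png"; // e.g. a "sunlit studio" prompt

    const scene = new THREE.Scene();
    const loader = new THREE.TextureLoader();

    // Trick one: wrap a flat 2D image around a mesh as a texture map,
    // which is how a generated texture ends up looking three-dimensional.
    const textureMap = loader.load(TEXTURE_URL);
    textureMap.colorSpace = THREE.SRGBColorSpace;
    const sphere = new THREE.Mesh(
      new THREE.SphereGeometry(1, 64, 64),
      new THREE.MeshStandardMaterial({ map: textureMap })
    );
    scene.add(sphere);
    scene.add(new THREE.AmbientLight(0xffffff, 1.5));

    // Trick two: park a second flat image behind the scene as a backdrop,
    // roughly what a generative background amounts to.
    scene.background = loader.load(BACKDROP_URL);

    // Standard camera-and-renderer setup to display the result.
    const camera = new THREE.PerspectiveCamera(50, innerWidth / innerHeight, 0.1, 100);
    camera.position.z = 3;
    const renderer = new THREE.WebGLRenderer();
    renderer.setSize(innerWidth, innerHeight);
    document.body.appendChild(renderer.domElement);
    renderer.setAnimationLoop(() => renderer.render(scene, camera));

The heavy lifting is in generating the image itself; once the 2D picture exists, applying it is conventional texturing work.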

The new Text to Texture and Generative Background features are available in the beta versions of Substance 3D Sampler 4.4 and Stager 3.0, respectively. Adobe’s head of 3D and metaverse, Sébastien Deguy, told The Verge that both features are free during the beta and have been trained on Adobe-owned assets, including reference materials produced by the company and licensed Adobe Stock content.