Adobe’s latest AI experiment generates music from text

This week, Adobe revealed an experimental audio AI tool to sit alongside its image-based generative features in Photoshop. Described by the company as “an early-stage generative AI music generation and editing tool,” Adobe’s Project Music GenAI Control can create music (and other audio) from text prompts, which users can then fine-tune in the same interface.

Adobe frames the Firefly-based technology as a creative ally that — unlike generative audio experiments like Google’s MusicLM — goes a step further and skips the hassle of moving the output to external apps like Pro Tools, Logic Pro or GarageBand for editing. “Instead of manually cutting existing music to make intros, outros, and background audio, Project Music GenAI Control could help users to create exactly the pieces they need—solving workflow pain points end-to-end,” Adobe wrote in an announcement blog post.

The company suggests starting with text inputs like “powerful rock,” “happy dance” or “sad jazz” as a foundation. From there, you can enter more prompts to adjust the generated music’s tempo, structure and repetition, increase its intensity, extend its length, remix entire sections or create loops. The company says the tool can even transform audio based on a reference melody.
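To give a rough sense of how this kind of prompt-driven generation works, here is a minimal sketch using the open-source MusicGen model through Hugging Face Transformers. To be clear, this is not Adobe’s tool or its API (neither has been published); the model name, prompts and output file below are illustrative stand-ins for the general text-to-music technique.

    # Illustrative only: open-source MusicGen, not Adobe's Project Music GenAI Control.
    # Requires: pip install torch transformers scipy
    from transformers import AutoProcessor, MusicgenForConditionalGeneration
    import scipy.io.wavfile

    processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
    model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

    # Text prompts in the same spirit as Adobe's suggested starting points.
    inputs = processor(text=["powerful rock", "sad jazz"], padding=True, return_tensors="pt")

    # Generate a short clip per prompt; more tokens means a longer clip.
    audio = model.generate(**inputs, do_sample=True, max_new_tokens=256)

    # Save the first clip at the model's native sampling rate.
    rate = model.config.audio_encoder.sampling_rate
    scipy.io.wavfile.write("powerful_rock.wav", rate=rate, data=audio[0, 0].numpy())

The editing controls Adobe describes (tempo, structure, intensity, looping, reference melodies) go well beyond what a plain generate call like this offers, which is exactly the gap Project Music GenAI Control is meant to close.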

Adobe says the resulting music is safe for commercial use. The company is also integrating its Content Credentials (“nutrition labels” for generated content), an attempt to be transparent about your masterpiece’s AI-assisted nature.

“One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music,” Adobe Research scientist Nicholas Bryan wrote.

The project is a collaboration with the University of California, San Diego and Carnegie Mellon University’s School of Computer Science. Adobe’s announcement emphasized Project Music GenAI Control’s experimental nature. (The company didn’t reveal much of the tool’s interface in its demo video, suggesting it may not have a consumer-facing UI yet.) So you may have to wait a while before the feature (presumably) makes its way into Adobe’s Creative Cloud suite.

This article originally appeared on Engadget at https://www.engadget.com/adobes-latest-ai-experiment-generates-music-from-text-184019169.html?src=rss


