Riffusion, using Stable Diffusion to generate spectrograms of music

The site keeps going down because the guys working on it are just doing it for fun and weren't ready for the news to spread yet

This has some cool samples you can play and explains how it works:

https://www.riffusion.com/about

Here are people talking about how bad and compressed it sounds and why, but also people amazed at how good it sounds:

https://news.ycombinator.com/item?id=34001908


Edit:

Can I run it locally?

https://github.com/hmartiro/riffusion-app

https://huggingface.co/riffusion/riffusion-model-v1/tree/main

The model is 15GB :marseyworried:
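If the checkpoint on Hugging Face behaves like a regular Stable Diffusion checkpoint, it should load with the diffusers library. A minimal sketch, assuming that's the case; the prompt and settings are just examples, not from the repo:

```python
# Minimal sketch, assuming riffusion-model-v1 loads like a standard Stable
# Diffusion checkpoint via diffusers; prompt and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",
    torch_dtype=torch.float16,
).to("cuda")  # needs a GPU with enough VRAM

# The model was fine-tuned so its "images" are spectrograms of short audio clips.
image = pipe(
    prompt="funky synth solo",
    num_inference_steps=50,
    guidance_scale=7.0,
).images[0]
image.save("spectrogram.png")  # turning this image into audio is a separate step
```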


Here's one of the authors talking about it on orange site:

https://news.ycombinator.com/item?id=33999162

Other author here! This got posted a little earlier than we intended so we didn't have our GPUs scaled up yet. Please hang on and try throughout the day!

Meanwhile, please read our about page http://riffusion.com/about

It’s all open source and the code lives at https://github.com/hmartiro/riffusion-app --> if you have a GPU you can run it yourself

This has been our hobby project for the past few months. Seeing the incredible results of stable diffusion, we were curious if we could fine tune the model to output spectrograms and then convert to audio clips. The answer to that was a resounding yes, and we became addicted to generating music from text prompts.

There are existing works for generating audio or MIDI from text, but none as simple or general as fine tuning the image-based model.

Taking it a step further, we made an interactive experience for generating looping audio from text prompts in real time. To do this we built a web app where you type in prompts like a jukebox, and audio clips are generated on the fly. To make the audio loop and transition smoothly, we implemented a pipeline that does img2img conditioning combined with latent space interpolation.
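A rough sketch of those two pieces, not the authors' actual code: spherical interpolation between two latent noise tensors (so neighboring clips share structure), and Griffin-Lim to estimate the missing phase when converting a magnitude spectrogram back to a waveform. Shapes and parameter values below are made up.

```python
# Hypothetical sketch of latent-space interpolation plus Griffin-Lim phase
# reconstruction; all shapes and parameters are illustrative, not Riffusion's.
import torch
import torchaudio


def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor) -> torch.Tensor:
    """Spherical linear interpolation between two latent tensors."""
    v0_flat, v1_flat = v0.flatten(), v1.flatten()
    dot = torch.sum(v0_flat * v1_flat) / (v0_flat.norm() * v1_flat.norm())
    theta = torch.acos(dot.clamp(-1.0, 1.0))
    if theta.abs() < 1e-6:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    return (torch.sin((1.0 - t) * theta) * v0 + torch.sin(t * theta) * v1) / torch.sin(theta)


# Interpolate between the latent seeds of two prompts to get an in-between clip
# that transitions smoothly when fed through the diffusion pipeline.
latent_a = torch.randn(1, 4, 64, 64)  # latent seed for prompt A
latent_b = torch.randn(1, 4, 64, 64)  # latent seed for prompt B
latent_mid = slerp(0.5, latent_a, latent_b)

# Once a generated spectrogram is decoded into a magnitude array, Griffin-Lim
# can estimate phase and produce a waveform (lossy, hence the audible artifacts).
n_fft = 2048
magnitude = torch.rand(n_fft // 2 + 1, 512)  # placeholder spectrogram magnitudes
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=n_fft, n_iter=32)
waveform = griffin_lim(magnitude)
```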

>if you have a GPU you can run it yourself

:#marseyparty: :#marseyrave: :#!marseyparty:

This is actually insane. Really creative use of the technology. SD conquers musicels too

:marseyjam:


https://i.rdrama.net/images/17187151446911044.webp https://i.rdrama.net/images/1735584487Pd3ql1pai5_mfA.webp https://i.rdrama.net/images/17177781034384797.webp

Applying SD to the spectrograms is genius. All other AI music generation that I've seen just uses MIDI, which A. requires someone to have transcribed the music already and B. doesn't capture timbre, which is just as important as the actual notes played for all modern music. So all you could really do is make some fake, r-slurred-sounding Bach. This is a game changer.
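Toy illustration of that point, assuming torchaudio and a local audio file (the filename is made up): the spectrogram is computed straight from the recorded waveform, so the resulting "image" carries timbre as well as the notes.

```python
# Toy example: a mel spectrogram computed from a real recording captures timbre,
# while MIDI would only store which notes were played. Filename is hypothetical.
import torchaudio

waveform, sample_rate = torchaudio.load("some_clip.wav")
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate,
    n_fft=2048,
    n_mels=128,
)(waveform)
print(mel.shape)  # (channels, n_mels, frames): the kind of image SD can be trained on
```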

Imagine an AI trained off samples of instruments so you can make new instruments, could have legs tbh.

with sufficient fabrication equipment connected to an AI you could have it designing better real instruments, then putting the players of those instruments out of a job too for the lulz

naah b-word it's always just like 4 chords or 4 notes or something
