
Riffusion: using Stable Diffusion to generate spectrograms of music

https://www.riffusion.com/about

The site keeps going down because it's a music generator, the people working on it are just doing it for fun, and they weren't ready for the news to spread yet.

The about page above has some cool samples you can play and explains how it works.

Here are people discussing how bad and compressed it sounds and why, though others are amazed at how good it sounds:

https://news.ycombinator.com/item?id=34001908


Edit:

Can I run it locally?

https://github.com/hmartiro/riffusion-app

https://huggingface.co/riffusion/riffusion-model-v1/tree/main

The model is 15GB :marseyworried:
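
If the Hugging Face repo loads as a standard diffusers pipeline (it looks like it ships a diffusers layout alongside the .ckpt, but treat that as an assumption), generating a spectrogram image from a text prompt is roughly this; the prompt and sampler settings are placeholders, not anything from their repo:

```python
# Minimal sketch: text prompt -> 512x512 spectrogram image.
# Assumes riffusion/riffusion-model-v1 loads as a regular Stable Diffusion
# pipeline and that a CUDA GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "riffusion/riffusion-model-v1",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "funk bassline with a jazzy saxophone solo",  # placeholder prompt
    num_inference_steps=50,
    guidance_scale=7.0,
).images[0]

image.save("spectrogram.png")  # still needs converting to audio, see below
```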


Here's one of the authors talking about it on the orange site:

https://news.ycombinator.com/item?id=33999162

Other author here! This got posted a little earlier than we intended, so we didn't have our GPUs scaled up yet. Please hang on and try throughout the day!

Meanwhile, please read our about page http://riffusion.com/about

It’s all open source and the code lives at https://github.com/hmartiro/riffusion-app --> if you have a GPU you can run it yourself

This has been our hobby project for the past few months. Seeing the incredible results of stable diffusion, we were curious if we could fine tune the model to output spectrograms and then convert to audio clips. The answer to that was a resounding yes, and we became addicted to generating music from text prompts. There are existing works for generating audio or MIDI from text, but none as simple or general as fine tuning the image-based model.

Taking it a step further, we made an interactive experience for generating looping audio from text prompts in real time. To do this we built a web app where you type in prompts like a jukebox, and audio clips are generated on the fly. To make the audio loop and transition smoothly, we implemented a pipeline that does img2img conditioning combined with latent space interpolation.

>if you have a GPU you can run it yourself

:#marseyparty: :#marseyrave: :#!marseyparty:
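
The latent space interpolation they mention is the trick that makes the clips loop and morph smoothly. A rough sketch of the idea (my own, not pulled from their codebase): spherically interpolate between two initial noise latents and decode each intermediate latent into its own spectrogram frame.

```python
# Sketch of spherical interpolation ("slerp") between two diffusion latents.
# Each interpolated latent would be run through the same diffusion/img2img
# step, so consecutive clips blend into each other instead of jump-cutting.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Spherically interpolate between latents a and b at fraction t in [0, 1]."""
    a_flat, b_flat = a.flatten(), b.flatten()
    cos_omega = torch.dot(a_flat, b_flat) / (a_flat.norm() * b_flat.norm() + eps)
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
    if omega.abs() < eps:
        return (1.0 - t) * a + t * b  # nearly parallel: plain lerp is fine
    sin_omega = torch.sin(omega)
    return (torch.sin((1.0 - t) * omega) / sin_omega) * a \
         + (torch.sin(t * omega) / sin_omega) * b

# Ten latents morphing from one random seed to another (standard SD latent
# shape for 512x512 output is 1x4x64x64).
latent_a = torch.randn(1, 4, 64, 64, generator=torch.Generator().manual_seed(1))
latent_b = torch.randn(1, 4, 64, 64, generator=torch.Generator().manual_seed(2))
frames = [slerp(i / 9, latent_a, latent_b) for i in range(10)]
```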


This is actually insane. Really creative use of the technology. SD conquers musicels too

:marseyjam:


https://i.rdrama.net/images/17092367509484937.webp https://i.rdrama.net/images/17093267613293715.webp https://i.rdrama.net/images/1711210096745272.webp


Applying SD to the spectrograms is genius. All other AI music generation that I've seen just uses MIDI, which (a) requires someone to have transcribed the music already and (b) doesn't capture timbre, which is just as important as the actual notes played for all modern music. So all you could really do is make some fake, r-slurred-sounding Bach. This is a game changer.


Imagine an AI trained on samples of instruments so you can make new instruments. Could have legs tbh.


With sufficient fabrication equipment connected to an AI, you could have it design better real instruments, then put the players of those instruments out of a job too, for the lulz.


naah b-word it's always just like 4 chords or 4 notes or something


:#marseywtf2:

I always thought spectrograms, even on a 4K screen, were low-resolution depictions of the real thing. The fact that it can get this sound out of them is amazing.


Oh no now my Em - G - C - D songs won’t be creative anymore!!!!


I've generated a song now - how do you actually play this?

![](/images/16711390344683673.webp)


Check out our audio processing code here

https://github.com/hmartiro/riffusion-inference/blob/main/riffusion/audio.py#L15

Good luck. I wanna hear it
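
Not their actual code, but the general shape of getting sound out of the picture is: treat the image as a mel spectrogram, map pixel brightness back to magnitudes, and run Griffin-Lim to reconstruct the phase. The pixel-to-dB mapping, sample rate, and STFT settings below are assumptions; their audio.py has the real values.

```python
# Hedged sketch: spectrogram image -> audible waveform via Griffin-Lim.
import numpy as np
import librosa
import soundfile as sf
from PIL import Image

def spectrogram_image_to_audio(path: str, sr: int = 44100) -> np.ndarray:
    img = Image.open(path).convert("L")      # greyscale spectrogram image
    pixels = np.asarray(img, dtype=np.float32)
    pixels = pixels[::-1, :]                 # flip so low frequencies sit in row 0
    db = pixels / 255.0 * 80.0 - 80.0        # assume brightness maps to -80..0 dB
    mel_power = librosa.db_to_power(db)
    # Invert the mel spectrogram with Griffin-Lim phase reconstruction.
    return librosa.feature.inverse.mel_to_audio(
        mel_power, sr=sr, n_fft=2048, hop_length=512, n_iter=32
    )

audio = spectrogram_image_to_audio("spectrogram.png")
sf.write("riff.wav", audio, 44100)
```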


I can't wait for Gorillaz 2.0, where the band's likeness and sound are fully AI generated, and to pirate it.


I wonder how many of the artists blowing their buttholes out over AI putting them out of jobs they never had also pirate things.


![](/images/16713796758869505.webp)

![](/images/16713797139442353.webp)


Me loving that zozbot

Work that cat

Babaaaay


Does this have potential military applications?


The military ai is probably able to actually turn frogs gay if it chooses. Maybe if us straggots are lucky it'll zap us


:#marseytwerking:

:marseycoin::marseycoin::marseycoin:

God I wish.


Endless permutations of All Star to torture stochastic terrorists with


Darn, now if only I didn't have a 1070 :marseysulk:


Why am I supposed to be soying all over this?


STABLE DIFFUSION AS A MUSICAL INSTRUMENT :marseyexcited:


it's neat but really slow

