It doesn't even work on modern training processes:

Your screenies don't support that claim (it's actually pretty difficult to test):

Describing an image is not the same as learning to draw an image.

The point of nightshade/glaze is to prevent an image from being useful for training the next version of SD or DALL-E or whatever.
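For context, the general idea behind this class of tool is an adversarial perturbation: nudge the pixels within a small budget so a feature extractor "sees" a different concept while a human still sees the original image. A generic PGD-style sketch of that idea (not the actual Nightshade code; the encoder, budget, and step count here are all assumptions):

```python
# Generic PGD-style feature-mimicry sketch, NOT the actual Nightshade code.
# `encoder` is any differentiable image feature extractor (e.g. a VAE or
# CLIP image encoder); eps/steps/lr are illustrative budgets.
import torch
import torch.nn.functional as F

def poison(image, target_image, encoder, eps=8 / 255, steps=100, lr=1 / 255):
    # image, target_image: float tensors in [0, 1], shape (1, 3, H, W)
    delta = torch.zeros_like(image, requires_grad=True)
    with torch.no_grad():
        target_feat = encoder(target_image)  # features of the decoy concept
    for _ in range(steps):
        feat = encoder((image + delta).clamp(0, 1))
        # Pull the perturbed image's features toward the decoy concept
        loss = F.mse_loss(feat, target_feat)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # signed gradient descent step
            delta.clamp_(-eps, eps)          # keep the change imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```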

it's actually pretty difficult to test

Not really. Furries did it, and the main dev coped about it on twitter saying that it's not possible to test nightshade locally since they had to use A100 clusters to do so.

Which is completely r-slurred if you think about it since that would mean local training is somehow immune :marseyxd:

Uh bro, training a model like SD does indeed require something akin to an A100 cluster.

Furries did it,

did what exactly?

training a model like SD does indeed require something akin to an A100 cluster

Pretraining maybe, fine-tuning certainly not. It's perfectly doable locally on 3090s, and it's much cheaper than renting clusters.
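For reference, a bare-bones SD1.5 finetuning step with Hugging Face diffusers looks roughly like this (the repo id and hyperparameters are illustrative; a real 3090 run would add LoRA or gradient checkpointing and mixed precision to fit in 24 GB):

```python
# Bare-bones eps-prediction finetuning step for SD1.5 via diffusers.
# The repo id is illustrative; swap in whatever local checkpoint you use.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"
device = "cuda"

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the UNet gets trained; VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def train_step(pixel_values, captions):
    # Encode images into scaled latents (SD convention).
    latents = vae.encode(pixel_values.to(device)).latent_dist.sample()
    latents = latents * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=device)
    noisy = scheduler.add_noise(latents, noise, t)
    tokens = tokenizer(captions, padding="max_length", truncation=True,
                       max_length=tokenizer.model_max_length,
                       return_tensors="pt").input_ids.to(device)
    cond = text_encoder(tokens)[0]
    # eps-prediction: the UNet learns to predict the noise that was added.
    pred = unet(noisy, t, cond).sample
    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```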

did what?

Finetuned base SD1.5 (the model most exposed to the "poison") with a poisoned dataset for a significant number of steps, with very little to show in favor of nightshade.
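They haven't published a writeup here, but one plausible way to check a result like that is to score generations from the poisoned finetune against the targeted concept with CLIP. The checkpoint path and prompt below are hypothetical:

```python
# One way to eyeball the result quantitatively: CLIP-score generations
# from the finetuned checkpoint against the targeted concept.
# The checkpoint path and prompt are hypothetical.
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/poisoned-finetune", torch_dtype=torch.float16).to("cuda")
clip = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
proc = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

prompt = "a photo of a dog"  # whatever concept the poison was aimed at
images = pipe(prompt, num_images_per_prompt=8).images

inputs = proc(text=[prompt], images=images, return_tensors="pt", padding=True)
with torch.no_grad():
    sims = clip(**inputs).logits_per_image.squeeze()
# If the poison worked, similarity to the prompt should drop versus
# the same check run on the un-poisoned base model.
print(sims)
```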

I agree insofar as these kinds of obfuscation methods are not useful for preventing people from recreating a specific artist's images, unless perhaps she has a truly distinctive style. If the original dataset (used for training the base model) already contained images with the elements necessary to recreate a given image, obfuscating that image (in a way that leaves it recognizable to humans) won't prevent the base model from being able to recreate it.

Pretraining

if that's what you want to call the first 99.9% of training...

Finetuned base SD1.5 (the model most exposed to the "poison") with a poisoned dataset for a significant number of steps, with very little to show in favor of nightshade.

do you have a link?

if that's what you want to call the first 99.9% of training...

Which then no one uses, since base SD1.5 is garbage at anything people actually want it to generate (realistic/anime/furry porn), which needs significant finetuning. Pretraining is always left to researchers who get clusters, since weight initialization is the boring part of ML.

do you have a link?

It's all in a groomercord thread on the furry diffusion server, they mostly wanted to know if there was a need to filter and counter it.

figuring out how to build an algorithm that can create images? boring!

figuring out how to make the algorithm accessible to noobs without them needing to understand any part of it, just by showing the algorithm their favorite fetish pics? also boring!

the only interesting part of ML is being a noob with fetish pics, showing those fetish pics to the algorithm, and then getting even more fetish pics.

:marseyrofl:

the only interesting part of ML is being a noob with fetish pics, showing those fetish pics to the algorithm, and then getting even more fetish pics.

This but unironically, 99% of people only care about this part

figuring out how to build an algorithm that can create images? boring!

Who are you quoting?

base SD1.5 is garbage at anything

except of course without it you wouldn't be able to do anything at all lol.

"finetuning" is just nudging a fully finished model in a desired direction.

It's all in a groomercord thread on the furry diffusion server, they mostly wanted to know if there was a need to filter and counter it.

So it's probably safe to assume they don't know what they're doing or what they're testing?

"finetuning" is just nudging a fully finished model in a desired direction.

Completely r-slurred. SD 1.5 is an eps-prediction model trained on natural-language captions at 512x512; current models are v-prediction, tag-based, and trained with aspect ratio bucketing at a 1088 base resolution. There's a significant difference between them, both in training time and in the underlying training tech.
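For the uninitiated, the difference in training targets is small on paper but matters in practice. Using the standard noise-schedule convention x_t = alpha_t*x0 + sigma_t*eps (v-prediction per Salimans & Ho, 2022), plus a toy version of what aspect ratio bucketing computes (the base/step values here are assumptions):

```python
# Training-target and bucketing toys. Noise schedule convention:
# x_t = alpha_t * x0 + sigma_t * eps.

def eps_target(x0, eps, alpha_t, sigma_t):
    # SD1.5-style: the network is trained to predict the added noise itself.
    return eps

def v_target(x0, eps, alpha_t, sigma_t):
    # v-prediction (Salimans & Ho, 2022): v = alpha_t * eps - sigma_t * x0.
    return alpha_t * eps - sigma_t * x0

def nearest_bucket(width, height, base=1088, step=64):
    # Aspect ratio bucketing: keep total pixel count near base**2 while
    # matching the image's aspect ratio, with sides rounded to multiples
    # of `step` so the latent dimensions stay valid.
    ar = width / height
    bucket_h = max(step, round((base * base / ar) ** 0.5 / step) * step)
    bucket_w = max(step, round(bucket_h * ar / step) * step)
    return bucket_w, bucket_h

# e.g. nearest_bucket(1920, 1080) -> (1472, 832),
# roughly the same pixel budget as 1088x1088
```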

That's not even mentioning the garbage dataset used for base SD1.5 training; looking at LAION for a split second will tell you that.

So it's probably safe to assume they don't know what they're doing or what they're testing?

They're the main group of trainers; most current local models come from them, so they definitely know way more than these grifters.

current models are v-prediction, tag-based, and trained with aspect ratio bucketing at a 1088 base resolution.

those "current models", were they built from scratch or on top of some existing model that cost a couple million dollars to train?
