!commenters I've been thinking about this for a while. What are some good options for a massive library?

Do you want something hands on or hands off?

I use Ceph, which is an excellent clustered file system/storage system, but it requires some amount of technical know-how.

If you want something hands off, you probably want one of those 4-bay or 8-bay systems that are "batteries included" (not literally) and come with a slick web interface to manage it all.

I'm pretty techie, so that sounds good.

I wouldn't go with Ceph unless you plan to do it for learning purposes; it's really meant for high-availability applications.

You can set up ZFS on an Ubuntu LTS install and be off to the races on commodity hardware pretty easily. I recommend Allan Jude's books FreeBSD Mastery: ZFS and FreeBSD Mastery: Advanced ZFS (written for FreeBSD, but it applies to Ubuntu too).
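
A minimal sketch of what that looks like on Ubuntu (the pool name and disk paths are placeholders for your own hardware):

```bash
# ZFS userland tools + kernel module on Ubuntu
sudo apt install zfsutils-linux

# Create a mirrored pool named "tank" from two whole disks.
# Using /dev/disk/by-id/ paths is the usual advice so device
# renumbering doesn't bite you; these names are placeholders.
sudo zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Carve out a dataset and check pool health
sudo zfs create tank/media
zpool status tank
```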

If that's too much, TrueNAS SCALE is a great way to dip your toes into running a NAS on consumer hardware.

If you want a more plug-and-play solution, Synology is the premium option: very rock solid, but it will cost you.

What would ZFS get me? My current setup is NFS with mdadm RAID1.

I mean I personally love the zfs send and recv snapshot functionality.

It allows you to do block-level backups.

i.e., when you go to use a regular backup program, it has to scan your entire drive to figure out which files changed. ZFS already knows this, because that's the whole point of snapshots, so it just sends the changes, no scanning. This ends up being an extremely efficient and fast way to handle backups.
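
Roughly what that workflow looks like (the dataset names and backup host here are made up for the example):

```bash
# Take point-in-time snapshots of the dataset
zfs snapshot tank/media@2024-06-01
zfs snapshot tank/media@2024-06-08

# First backup: send the full initial snapshot to a pool on another box
zfs send tank/media@2024-06-01 | ssh backupbox zfs recv backup/media

# Every run after that: send only the blocks that changed between the
# two snapshots (-i = incremental), no filesystem scan involved
zfs send -i tank/media@2024-06-01 tank/media@2024-06-08 | ssh backupbox zfs recv backup/media
```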

Being a modern checksumming filesystem, it also gives you protection against "bitrot" and general corruption, which mdadm won't. That property isn't too exciting until your drives start dying, and then it's suddenly super useful.

There are other benefits, like transparent compression, built-in encryption, and the general UX of the tooling, that are individually small but add up to a very enjoyable stack to use.
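
Those are all just dataset properties you flip on; for example (lz4 is a common choice, not the only option):

```bash
# Transparent compression (applies to data written from now on)
zfs set compression=lz4 tank/media

# Native encryption has to be chosen when the dataset is created
zfs create -o encryption=on -o keyformat=passphrase tank/private

# Check what's set
zfs get compression,encryption tank/media tank/private
```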

Does it let me configure redundancy like RAID1?

Yes, absolutely.

They use their own terminology, since there are sometimes low-level differences, but they're only an improvement over traditional RAID in my experience.

RAID1 would be equivalent to a "mirror"

RAID0 would be equivalent to a "stripe"

RAID10 would be equivalent to "striped mirrors"

It also has RAID5/RAID6-style topologies in the form of raidz1, raidz2, etc.
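
In zpool terms those map to commands like these (the disk names are placeholders):

```bash
# RAID0 equivalent: just list the disks (a stripe)
zpool create tank sda sdb

# RAID1 equivalent: a two-disk mirror
zpool create tank mirror sda sdb

# RAID10 equivalent: striped mirrors (two mirror vdevs in one pool)
zpool create tank mirror sda sdb mirror sdc sdd

# RAID5 / RAID6 equivalents
zpool create tank raidz1 sda sdb sdc
zpool create tank raidz2 sda sdb sdc sdd
```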

One of the reasons for the good UX is that zfs controls all layers of the stack.

Everything from actual drive management up to the filesystem level is handled by ZFS. You don't have to glue together multiple subsystems from different authors to get good results (e.g. Red Hat's Stratis, where mdadm is one layer of many).

Hmm, I'm looking at setting up a remote backup of my current setup; I think I'll play with ZFS for that.

So I have maybe 20 TB of stuff, and I figure while I have something running I'll plug in an old Ryzen and an Nvidia graphics card. I have enough spares, except HDDs, to do that. I would use the card for maybe a locally hosted LLM and maybe some other recreational AI stuff. I'd love for one of those to be local and figure out all my files and memes and shit. Just a true organizer.

!codecels any suggestions there?

Maybe I could train one on my likes on rdrama and filter out bad pings.

>any suggestions there?

Install stable diffusion so you can jack off to huge titted anime sluts!

>Maybe i could train one on my likes on rdrama and filter out bad pings

You can train it on my pings :marseyembrace:

You could install Proxmox, which has native ZFS support (it's just regular Debian underneath), and then pass through the GPU to a dedicated ML VM. (Nvidia card drivers are picky on Linux, but there are lots of video guides for it, I'm sure.)
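
A very rough sketch of the passthrough side (the PCI address and VM ID are placeholders, and the exact steps vary a bit between Proxmox versions, so treat this as an outline rather than a recipe):

```bash
# 1. Enable the IOMMU in /etc/default/grub, then update-grub and reboot, e.g.
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"

# 2. Find the GPU's PCI address
lspci -nn | grep -i nvidia

# 3. Hand the device to VM 101 (q35 machine type is the usual advice for PCIe passthrough)
qm set 101 --hostpci0 0000:01:00.0,pcie=1
```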

zoz

zle

zozzle

It's designed to run on a server cluster (multiple servers), but you can run it on a single node just fine. I think there's even a special flag you can pass when configuring it to set some sensible defaults for single-node use (they officially say it's for testing only, but that's because they're obsessed with high availability, which obviously you don't care about with a single node). It's a "new generation" storage system, so it doesn't need your drives to be the same size or anything.
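
If I'm remembering right, that flag is cephadm's --single-host-defaults, so a single-node bootstrap looks roughly like this (the IP is a placeholder, and double-check the flag against the docs for your release):

```bash
# Bootstrap a one-node Ceph cluster; --single-host-defaults relaxes the
# replication/placement defaults that normally assume multiple hosts
cephadm bootstrap --mon-ip 192.168.1.10 --single-host-defaults

# Then let it turn every unused drive into an OSD
ceph orch apply osd --all-available-devices
```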

If you really are only ever going to have one node, you can also look into Btrfs or ZFS. I used ZFS a long time ago (like a decade ago, I think lol); it's kinda like RAID but far more flexible, with some other benefits (e.g. rebuilding a failed drive only requires copying the data that was actually on that drive, not the whole disk like a real RAID array, so it's much faster and can be done online).

ZFS is an amazing piece of software, highly recommended.

I still can't recommend btrfs to this day, and I would probably recommend experimenting with bcachefs if you want to have fun.

tbh I'd just use LVM and be done with it. I messed around with Ceph and Gluster a while ago, but it was getting unnecessarily complex just for some pirate storage.

If you want something simpler on a single node, you should at least use btrfs or ZFS. Doesn't LVM only support basic RAID (as far as multi-disk support goes)?

I don't bother with complicated RAID setups for this particular use case. I suppose if you've got two disks the same size, you might as well do RAID0 via mdadm or something, but otherwise just add the disks together with LVM and be done. If one disk fails you don't have redundancy, but they're only torrents, so you can just reload the torrent files and redownload whatever data got lost.
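
For reference, that's about three commands (device names are placeholders):

```bash
# Tag both disks as LVM physical volumes and pool them into one volume group
pvcreate /dev/sdb /dev/sdc
vgcreate media /dev/sdb /dev/sdc

# One big linear volume across everything, then format and mount as usual
lvcreate -n torrents -l 100%FREE media
mkfs.ext4 /dev/media/torrents
mount /dev/media/torrents /mnt/torrents
```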

I'm not really a fan of btrfs; it again seems overengineered. I tried it out for a while, and having to use a separate command just to see how much space is free was annoying (doubly so because that free space was just an estimate). I've never tried ZFS, but it seems like a similar type of filesystem.
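
For anyone who hasn't hit this: plain df technically works, but the numbers stop meaning much once multiple devices and snapshots are involved, so you end up reaching for the btrfs-specific reporting instead (mount point is a placeholder):

```bash
# The generic answer, which can be misleading on btrfs
df -h /mnt/storage

# The btrfs-aware breakdown (data vs metadata, profile, estimated free space)
btrfs filesystem df /mnt/storage
btrfs filesystem usage /mnt/storage
```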

Surely these filesystems have their place in the world but imo not in this particular situation.

There's absolutely nothing complicated about the basic parity RAID that both ZFS and btrfs offer lmao.

You're basically running with all the resilience of RAID0 but none of the performance benefits, all because you can't figure out something fairly simple. In fact, most operational tasks are going to be simpler with ZFS than with mdadm. For example, if a disk fails in an mdadm RAID5, you need to replace it with another disk of the same size before a rebuild can proceed. With "raidz1" (basically ZFS's version of RAID5) you don't need that; it'll just automatically rebuild the "array" with one fewer disk (as long as you haven't filled it up so there's room to do so). You can also buy disks at different times with different sizes with no issues.
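
Day to day that's about this much typing (pool and device names are placeholders):

```bash
# See which vdev is degraded and which disk dropped out
zpool status tank

# Swap in the replacement; the resilver only copies blocks that are actually in use
zpool replace tank /dev/disk/by-id/ata-OLDDISK /dev/disk/by-id/ata-NEWDISK

# Watch the resilver progress
zpool status -v tank
```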

Ceph? I didn't realize you were a turbo nerd; that's some serious cred.

Get a big butt external hard drive if you aren't a massive cute twink.

Trans lives matter

fmovies
