<bslsk05_>
lore.kernel.org: Making sure you're not a bot!
<nikolar>
huh
<nikolar>
fun
santo_ has quit [Ping timeout: 252 seconds]
sprock has quit [Ping timeout: 272 seconds]
sprock has joined #osdev
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
simjnd has quit [Ping timeout: 276 seconds]
Ermine has quit [Ping timeout: 248 seconds]
Ermine has joined #osdev
<geist>
huh i wonder if there's an actual jump there, or it's just 'we ran out of models'
<geist>
i wouldn't be surprised if it was the latter, since iirc they've been basically family 6 model <8 bit number> forever
Left_Turn has joined #osdev
simjnd has joined #osdev
<nikolar>
oh geist, you asked about using partitions with zfs
<nikolar>
you can
Turn_Left has quit [Ping timeout: 276 seconds]
<nikolar>
it doesn't care, you can give it regular files, which is handy if you want to test stuff and you don't have spare drives
<geist>
yeah makes sense. it internally creates its own 2 GPT partitions, but i guess that's mostly as a protection mechanism, it probably has its own way of finding its own metadata
<nikolar>
yeah, and it needs a bunch of metadata to assemble the pools in the proper hierarchy (like raid10 type of stuff)
<nikolar>
it's ridiculously flexible
<heat>
zfs? never heard of her
<heat>
i use btrfs
<nikolar>
heat: can btrfs handle its drive disappearing and reappearing without affecting software that's trying to access it
<heat>
i don't know
<nikolar>
zfs can
<heat>
i'll try opening up and unscrewing my NVMe
<heat>
and i'll tell you the result
<nikolar>
do that
<heat>
zfs can if you have any sort of raid yeah?
<nikolar>
even if you don't have raid, it will just block reads
<nikolar>
and when the drive is back, you tell it to keep going
<nikolar>
and it just resumes normally
<heat>
holy shit that's the stupidest thing i've ever heard
<CompanionCube>
there are exactly two differences when giving zfs a whole disk: the vestigial 8MB 'solaris reserved' partition and the 'whole_disk' flag in the pool labels.
<nikolar>
heat: doubt it, but go on
<heat>
go on what?
<nikolar>
why is it the stupidest thing
<heat>
why would you ever want reads to block indefinitely
<nikolar>
so they can resume?
<nikolar>
without data loss?
<nikolar>
you can tell it to fail the calls if you want
<heat>
... you removed your storage device
<heat>
what exactly are you expecting
<nikolar>
well i did run into an issue where this was very helpful
<nikolar>
i was copying stuff from the old m2 drive in a usb enclosure
<nikolar>
which was very temperamental for some reason
simjnd has quit [Ping timeout: 260 seconds]
<nikolar>
and it would stall or disconnect after a bit
<nikolar>
i just plug it back in and tell zfs to go on
<CompanionCube>
the downside is if it doesn't want to clear, you are stuck rebooting unless they got around to adding the equivalent of -f for export, iirc they haven't?
<nikolar>
I couldn't find it if they did
<nikolar>
Didn't try too hard though
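The resume-after-disconnect behavior nikolar describes maps to the pool's `failmode` property and `zpool clear`. A minimal sketch, assuming a pool named `tank` (the name is hypothetical):

```shell
# failmode=wait (the default) blocks I/O while the device is gone;
# failmode=continue returns EIO to new I/O instead of blocking.
zpool get failmode tank
zpool set failmode=wait tank

# Once the drive reappears, tell the suspended pool to resume:
zpool clear tank
zpool status tank
```
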
jcea has joined #osdev
simjnd has joined #osdev
<clever>
nikolar: run something like `zdb -ull /dev/nvme0n1p1` to see the zfs metadata in a partition
<clever>
the main thing is a tree of key/value pairs, that describe that block dev, and the siblings in the vdev
<nikolar>
yup, fun stuff
<nikolar>
it's btrees all the way down
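The inspection clever suggests looks roughly like this; the device path is just an example, not a claim about any particular machine:

```shell
# Dump the four vdev labels (the key/value nvlist describing this
# block dev and its siblings in the vdev):
zdb -l /dev/nvme0n1p1

# -u adds the uberblock array; repeating a flag increases verbosity,
# so -ull is "uberblocks plus extra label detail":
zdb -ull /dev/nvme0n1p1
```
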
<clever>
the slightly weird thing, is that there is enough reserved space in zfs, to overlap it with MBR tables and even store the grub stage1 binary
<clever>
but the linux tooling doesn't do that, and instead will create some gpt tables, if you give it a raw block dev
<nikolar>
oh that's why
<heat>
zfs truly is terrible
<heat>
i have to congratulate sun microsystems
<heat>
i didn't know they could make something i so intensely unenjoy
<heat>
java is a masterpiece compared to zfs
<nikolar>
why do you unenjoy zfs but enjoy btrfs
<nikolar>
(which is way iffier)
<heat>
i don't specifically enjoy btrfs but i tolerate it, because it does work, and is integrated with normal storage stack stuff
<nikolar>
well zfs does work too
<nikolar>
pretty well actually
<heat>
zfs is a special boy in every regard
<nikolar>
sure
<heat>
it's super foreign to linux, and still foreign to solaris
<heat>
and foreign to freebsd
<heat>
zfs is basically what if someone decided to override everyone else and try to get a promotion
<heat>
by writing a whole storage solution from top to bottom
<nikolar>
and it still works better than btrfs
<heat>
i'm surprised zfs doesn't require special drivers
<nikolar>
what do you mean
<heat>
it's the only special thing it's missing
<heat>
the rest of the stack is fully custom
<heat>
and, to be fucking clear, btrfs is guilty of this too
<nikolar>
your point?
<nikolar>
there we go
<heat>
but not at the same level
<heat>
i also hate on btrfs
<heat>
i use ext4 remember
<nikolar>
zfs gets a pass from me because it actually works in multidrive pools
<nikolar>
reliable
<nikolar>
*reliably
<nikolar>
unlike btrfs
<nikolar>
heat: do you use ext4 on your work computer
<heat>
no i use btrfs because it's the tumbleweed default
<heat>
i don't use any btrfs feature, at all
<heat>
i can say that it seems to work
<nikolar>
not even compression?
<heat>
nope
<nikolar>
free storage though
<heat>
i think zypper does auto snapshots? but i don't even know how to restore a snapshot
<nikolar>
you should turn it on, it will just compress every new modification, while the old stuff stays as is
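Turning on btrfs compression the way nikolar describes (only new writes get compressed, old data stays as-is) is a mount option; `/mountpoint` is a placeholder:

```shell
# Enable zstd compression for future writes:
mount -o remount,compress=zstd /mountpoint

# Or persistently in /etc/fstab:
#   UUID=...  /  btrfs  compress=zstd  0 0

# Optionally recompress existing files in place:
btrfs filesystem defragment -r -czstd /mountpoint
```

With plain `compress=` (as opposed to `compress-force=`), btrfs gives up on a file whose first blocks compress poorly, which matches the "writes it as is" behavior described above.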
<nikolar>
heat: you'll learn when you need to lol
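For the snapshot question: Tumbleweed pairs zypper with snapper, and restoring typically looks like the sketch below. The snapshot numbers are hypothetical:

```shell
# List snapshots (zypper creates pre/post pairs per transaction):
snapper list

# Diff a pre/post pair to see what a transaction changed:
snapper status 41..42

# Roll the root filesystem back to a snapshot, then reboot:
snapper rollback 41
```
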
<heat>
do i want to compress everything though?
<heat>
it's not unusual for compression to have horrible drawbacks
<nikolar>
it doesn't compress everything
<nikolar>
like if the first block it tries to write compresses terribly, it will just write it as is
<nikolar>
or something like that
<nikolar>
and i am not aware of horrible drawbacks
<heat>
for ext4 i know encryption disables all sorts of IO fast paths
<heat>
like O_DIRECT will never work
<nikolar>
how much do you care about that on your pc
<heat>
i don't care about direct IO generally
<heat>
but i do like speed
<nikolar>
compression should improve speed then, in theory
<nikolar>
because writing to drives is slow, so the less you have to write, the faster it goes
<heat>
i have an nvme
<heat>
it is very fast
<mjg>
unless you are bottlenecked by compression
<nikolar>
still way slower than ram
<mjg>
you may ask: how is that even possible
<nikolar>
mjg: just choose a fast compression algo
<nikolar>
any compression is a win
<mjg>
and this is where i can point you at HORRID AF zstd compression in linux
<mjg>
erm, zfs
<nikolar>
lol
<heat>
storage being CPU bound is definitely real for certain setups
<mjg>
which is basically self-induced
<mjg>
the people who wrote it are total webdevs
<heat>
TOTAL WEBDEVS MON
goliath has quit [Quit: SIGSEGV]
<clever>
nikolar: every vdev member in zfs, has 4 labels, 256kb each, 2 at the head, and 2 at the tail
<nikolar>
ye
<clever>
there is also a 3.5mb "boot block" at the head, after the 2 labels (offset 512kb), where grub stage1 could live
Turn_Left has joined #osdev
<clever>
and the first 8k of each label is reserved as the "blank space", where MBR tables could live
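The head-of-device layout clever lays out (two 256 KiB labels, then a 3.5 MiB boot block) can be sanity-checked with some shell arithmetic:

```shell
# On-disk offsets for the head of a zfs vdev member, in KiB.
label=256                       # each of the 4 labels is 256 KiB
blank=8                         # first 8 KiB of each label is "blank space"
boot=3584                       # 3.5 MiB boot block after the head labels

l1=$((label))                   # label 1 right after label 0, at 256 KiB
boot_off=$((2 * label))         # boot block at 512 KiB
data_off=$((2 * label + boot))  # allocatable space begins at 4 MiB

echo "L1 offset:   ${l1} KiB"
echo "boot block:  ${boot_off} KiB"
echo "data starts: ${data_off} KiB"
```

So the first 4 MiB of the device are reserved metadata, with the other two labels mirrored at the tail.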
<nikolar>
does anything use those
<clever>
nothing that i know of
<nikolar>
nice
<clever>
i'm not sure how well gpt would play with that
<clever>
and then you have the 2tb problem of mbr
Left_Turn has quit [Ping timeout: 276 seconds]
<heat>
mjg: have you noticed that on linux zfs the ARC still exists
<heat>
and it uses a fucking shrinker
<heat>
total webdev mon
<heat>
that feeling when you carefully set up your vm.swappiness to fine tune your VM performance
<heat>
and zfs.ko just shits all over it
<clever>
heat: i did just have OOM killing steam, and leaving fat 7gig firefox children lying around
<clever>
but the ARC also shrank down to 2gig in response
<clever>
so it gave me all it could
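On Linux the ARC behavior being argued about is observable and tunable through the usual ZoL knobs; the 4 GiB cap below is just an example value:

```shell
# Current ARC size and its target ceiling, in bytes:
awk '$1 == "size" || $1 == "c_max" { print $1, $3 }' /proc/spl/kstat/zfs/arcstats

# Cap the ARC at 4 GiB for this boot:
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Or persistently via modprobe options:
#   /etc/modprobe.d/zfs.conf:  options zfs zfs_arc_max=4294967296
```

This is what lets the ARC shrink under memory pressure (via the shrinker heat mentions) while still ignoring knobs like vm.swappiness.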
Turn_Left has quit [Read error: Connection reset by peer]
Lucretia has quit [Remote host closed the connection]
rom4ik has quit [Quit: Ping timeout (120 seconds)]
rom4ik has joined #osdev
Gooberpatrol_66 has joined #osdev
edr has quit [Ping timeout: 244 seconds]
Gooberpatrol66 has quit [Ping timeout: 245 seconds]