klange changed the topic of #osdev to: Operating System Development || Don't ask to ask---just ask! || For 3+ LoC, use a pastebin (for example https://gist.github.com/) || Stats + Old logs: http://osdev-logs.qzx.com New Logs: https://libera.irclog.whitequark.org/osdev || Visit https://wiki.osdev.org and https://forum.osdev.org || Books: https://wiki.osdev.org/Books
simjnd has quit [Ping timeout: 276 seconds]
goliath has quit [Quit: SIGSEGV]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 245 seconds]
edr has quit [Quit: Leaving]
the_oz has joined #osdev
arminweigl_ has joined #osdev
arminweigl has quit [Ping timeout: 245 seconds]
arminweigl_ is now known as arminweigl
cultpony has quit [Ping timeout: 245 seconds]
simjnd has joined #osdev
cultpony has joined #osdev
simjnd has quit [Ping timeout: 260 seconds]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 248 seconds]
jcea has quit [Ping timeout: 276 seconds]
_whitelogger has joined #osdev
simjnd has joined #osdev
karenw has quit [Ping timeout: 265 seconds]
simjnd has quit [Ping timeout: 248 seconds]
_whitelogger has joined #osdev
simjnd has quit [Ping timeout: 252 seconds]
arminweigl has quit [Remote host closed the connection]
arminweigl has joined #osdev
Jari-- has joined #osdev
simjnd has joined #osdev
simjnd has quit [Ping timeout: 276 seconds]
* klys dies from falling portugal
simjnd has joined #osdev
simjnd has quit [Ping timeout: 248 seconds]
randm has quit [Remote host closed the connection]
randm has joined #osdev
simjnd has joined #osdev
simjnd has quit [Ping timeout: 260 seconds]
_whitelogger has joined #osdev
simjnd has joined #osdev
simjnd has quit [Ping timeout: 260 seconds]
simjnd has joined #osdev
simjnd has quit [Remote host closed the connection]
arminweigl has quit [Remote host closed the connection]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 248 seconds]
arminweigl has joined #osdev
xenos1984 has quit [Read error: Connection reset by peer]
slow99 has quit [Quit: slow99]
slow99 has joined #osdev
xenos1984 has joined #osdev
simjnd has joined #osdev
jedesa has joined #osdev
simjnd has quit [Ping timeout: 276 seconds]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 260 seconds]
simjnd has joined #osdev
arminweigl has quit [Remote host closed the connection]
arminweigl has joined #osdev
arminweigl has quit [Remote host closed the connection]
arminweigl has joined #osdev
netbsduser` has joined #osdev
GeDaMo has joined #osdev
Gooberpatrol66 has quit [Ping timeout: 245 seconds]
Left_Turn has joined #osdev
simjnd has quit [Ping timeout: 252 seconds]
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 268 seconds]
jedesa has quit [Quit: jedesa]
simjnd has joined #osdev
Jari-- has quit [Ping timeout: 252 seconds]
simjnd has quit [Ping timeout: 245 seconds]
xvmt has quit [Ping timeout: 260 seconds]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 276 seconds]
nyah has quit [Remote host closed the connection]
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
goliath has joined #osdev
jcea has joined #osdev
simjnd has joined #osdev
simjnd has quit [Ping timeout: 248 seconds]
<zid> It's friday, I am reliably told we need to "get down", in fact, we "gotta"
<pog> it's friday, therefore i'm in love
<pog> as the great philosopher robert smith says
<zid> is he one of The Smiths
<pog> ironically, no
<zid> Maybe he's an honorary one
jedesa has joined #osdev
simjnd has joined #osdev
Turn_Left has quit [Ping timeout: 252 seconds]
simjnd has quit [Ping timeout: 276 seconds]
Turn_Left has joined #osdev
<the_oz> it's fry day fry day fun fun fun fun FUN FUN FUN FUN FUN AAAAAAAAAAAHHHHHHHHHHHHHHHHHHHH
adder has quit [Ping timeout: 276 seconds]
adder has joined #osdev
* Ermine reads sles blurb
<Ermine> > Proven resiliency and rollback capabilities backed by btrfs filesystem
<Ermine> So basically one has to use btrfs?
<the_oz> That just means "I cribbed these features"
<the_oz> yours? ess mine nao
<zid> I think nikolar might have died
<heat> Ermine: SLES/SLED and even openSUSE are pretty heavily btrfspilled
<heat> for the system partition at least
<heat> for database workloads I think we recommend XFS
<Ermine> yea, opensuse installer proposes btrfs by default
<Ermine> fedora does that too btw
* Ermine will consider xfs though
simjnd has joined #osdev
edr has joined #osdev
<nortti> didn't suse use to default to xfs not too long ago, too?
simjnd has quit [Ping timeout: 260 seconds]
<heat> > With SUSE Linux Enterprise 12, Btrfs is the default file system for the operating system and XFS is the default for all other use cases
<heat> SLE12 dates back to 2014
<zid> SLE12 is really bad hackerspeak for SLER?
goliath has quit [Quit: SIGSEGV]
imyxh has quit [Ping timeout: 248 seconds]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 276 seconds]
imyxh has joined #osdev
karenw has joined #osdev
goliath has joined #osdev
imyxh has quit [Ping timeout: 252 seconds]
Turn_Left has quit [Read error: Connection reset by peer]
imyxh has joined #osdev
Turn_Left has joined #osdev
simjnd has joined #osdev
Left_Turn has joined #osdev
Turn_Left has quit [Ping timeout: 276 seconds]
simjnd has quit [Ping timeout: 244 seconds]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 248 seconds]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 248 seconds]
simjnd has joined #osdev
guideX_ has joined #osdev
guideX has quit [Ping timeout: 244 seconds]
Turn_Left has joined #osdev
Left_Turn has quit [Ping timeout: 252 seconds]
Gooberpatrol66 has joined #osdev
edr has quit [Ping timeout: 252 seconds]
xenos1984 has quit [Ping timeout: 268 seconds]
xenos1984 has joined #osdev
karenw has quit [Ping timeout: 252 seconds]
Teukka has quit [Read error: Connection reset by peer]
Teukka has joined #osdev
karenw has joined #osdev
xvmt has joined #osdev
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
xvmt has quit [Ping timeout: 276 seconds]
xvmt has joined #osdev
guideX_ is now known as guideX
xenos1984 has quit [Ping timeout: 248 seconds]
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
puck has quit [Remote host closed the connection]
bslsk05 has quit [Remote host closed the connection]
puck has joined #osdev
sbalmos has quit [Quit: WeeChat 4.6.3]
bslsk05 has joined #osdev
kpel has joined #osdev
Brnocrist has quit [Ping timeout: 252 seconds]
Brnocrist has joined #osdev
xenos1984 has joined #osdev
Turn_Left has quit [Remote host closed the connection]
Turn_Left has joined #osdev
troseman has quit [Quit: troseman]
sbalmos has joined #osdev
troseman has joined #osdev
Left_Turn has joined #osdev
Turn_Left has quit [Ping timeout: 265 seconds]
simjnd has quit [Ping timeout: 260 seconds]
Turn_Left has joined #osdev
simjnd has joined #osdev
Left_Turn has quit [Ping timeout: 260 seconds]
GeDaMo has quit [Quit: 0wt 0f v0w3ls.]
simjnd has quit [Ping timeout: 276 seconds]
Left_Turn has joined #osdev
Turn_Left has quit [Ping timeout: 272 seconds]
<geist> FWIW i've been using btrfs almost exclusively for at least 5 or 6 years and love it
<geist> it has a lot of really useful features
simjnd has joined #osdev
edr has joined #osdev
simjnd has quit [Ping timeout: 260 seconds]
simjnd has joined #osdev
simjnd has quit [Ping timeout: 252 seconds]
netbsduser` has quit [Ping timeout: 252 seconds]
Left_Turn has quit [Read error: Connection reset by peer]
Left_Turn has joined #osdev
simjnd has joined #osdev
<the_oz> I wish zfs had more of a "here's a bunch of disks you figure it out and stop being retarded wrt slop and stop being so static"
<the_oz> *smacks donkey* NO BAD ROBOT
<clever> the_oz: if you dont need any redundancy, you can just throw disks at it, and get the combined space
<geist> yeah i've been futzing a lot with zfs lately too, and i think it's great, but there's a bit less flexibility in the way you set up vdevs and whatnot that's a bit annoying if you're just tinkering around
<geist> if you want redundancy that is
<geist> i'm currently using a 5 disk vdev with raidz1 + an SSD acting as a cache device for my proxmox server, and it works well
<geist> that's a nice thing that zfs has, zvols that you can carve out of it and put a vm directly on top of, etc
<geist> also the large memory default of ARC is something to watch out for
<zid> combining disks like that is anti-redundancy though
<zid> as presumably any crashed-out drive is also going to take out a bunch of data it didn't even hold, if there's fragmentation, plus you then NEED to use recovery tools
<clever> the ARC will also discard data under memory pressure, so it shouldnt cause too many problems
Left_Turn has quit [Read error: Connection reset by peer]
<clever> zid: yeah, if you have any non-redundant vdevs in a pool, you can basically consider the entire pool lost when one non-redundant drive fails
Left_Turn has joined #osdev
<geist> right, the part that i wish they'd add the feature for is removing vdevs from a pool (provided it can move the space around)
<geist> within the vdev you can generally at least expand them, but once you add the vdev you're stuck with it
<geist> (except for cache, etc vdevs)
<clever> the limitation with the old raidz logic, is that if you use raidz1 to combine 3 drives, it basically just does a dumb stripe between all 3
<clever> so basically, `blocknr = byte_offset / 4096; diskNr = blocknr % 3;`
<geist> not sure you can re-stripe a raidz by adding new devices
<geist> as well
<clever> and then all records in the raidz1, are made up of 2 parts data, and 1 part parity
<geist> but these are generally tinkering around sort of things. if you're building a Real NAS, or doing something enterprise you would almost certainly plan ahead
<clever> but the starting blocknr, can be anything, so the starting diskNr can be any disk
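(A minimal sketch of the mapping clever describes above, assuming a 3-disk raidz1 and a 4 KiB ashift block; the constants and helper names are illustrative, not ZFS internals.)
```c
#include <stdio.h>
#include <stdint.h>

#define ASHIFT_BLOCK 4096u   /* assumed ashift-sized block */
#define NDISKS       3u      /* assumed 3-wide raidz1 */

/* which disk a record starting at byte_offset lands on, per the dumb-stripe model */
static void map_offset(uint64_t byte_offset)
{
    uint64_t blocknr = byte_offset / ASHIFT_BLOCK;
    unsigned disknr  = (unsigned)(blocknr % NDISKS);
    printf("offset %llu -> block %llu starts on disk %u\n",
           (unsigned long long)byte_offset,
           (unsigned long long)blocknr, disknr);
}

int main(void)
{
    map_offset(0);          /* starts on disk 0 */
    map_offset(4096);       /* next block starts on disk 1 */
    map_offset(128 * 1024); /* a later record can start on any of the 3 disks */
    return 0;
}
```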
<geist> i like how btrfs is a bit more flexible in this regard. the equivalent is roughly that every disk is a vdev, and the striping/raiding/etc is happening on top of that
<clever> there is a new zfs feature for more flexible raid'ing, but ive not looked into its rules
<geist> the redundancy is implemented at the allocation level on top of the devices
<geist> and it allows rebalancing and removing devices
<clever> zfs does have something close to that, where it can store 2 or 3 copies of every record
<clever> and internally, a lot of critical metadata already does that
<clever> but i dont think it has any rule to prevent storing the 3 copies on the same disk
<clever> its more to guard against a bit-flip taking out a single-disk pool, because it hit a superblock type structure
<the_oz> clever geom concat yeah
Turn_Left has joined #osdev
<geist> yah that's why you usually run btrfs metadata slices at at least 2 dups (though you can run it higher, and dynamically decide to as well)
<clever> the other bit about raidz1, is that while the starting position may be based on the blocknr in 4kb blocks (specifically, the ashift size), the data is not striped in 4k chunks
<geist> i think the critical allocation tree is always duped across all devices because it's part of a system slice
<clever> a 128kb record, has 128kb of data, and 64kb of parity, so it just cuts that data into a pair of 64kb halves
<clever> and then 64kb goes to each disk
<clever> so it doesnt have to interleave the disk io on read, it can just concat the 3 parts
<clever> also, if a record is too small, say 4kb or under, it sort of degrades into more of a mirror configuration
<clever> where its only storing 4kb + 4kb, across 2 disks
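(A hedged sketch of the per-record space math clever outlines for a 3-disk raidz1, i.e. 2 data shares + 1 parity share, with small records falling back to a mirror-like data + parity copy; the helper and numbers are illustrative, not the real allocator.)
```c
#include <stdio.h>

#define ASHIFT_BLOCK (4u * 1024u)   /* assumed 4 KiB ashift block */

/* approximate on-disk footprint of a record on a 3-disk raidz1, per the model above */
static unsigned raidz1_footprint_3disk(unsigned record_bytes)
{
    if (record_bytes <= ASHIFT_BLOCK)
        return record_bytes + ASHIFT_BLOCK;   /* tiny record: data + equal-sized parity copy */
    return record_bytes + record_bytes / 2;   /* two 64 KiB data halves + 64 KiB parity for 128 KiB */
}

int main(void)
{
    printf("128 KiB record -> %u KiB on disk\n", raidz1_footprint_3disk(128 * 1024) / 1024);
    printf("  4 KiB record -> %u KiB on disk\n", raidz1_footprint_3disk(4 * 1024) / 1024);
    return 0;
}
```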
<bslsk05> ​docs.google.com: ZFS overhead calc.xlsx - Google Sheets
Left_Turn has quit [Ping timeout: 276 seconds]
<clever> ah, there it is, the new thing in zfs is called "draid"
<the_oz> if the new feature is the feature I'm thinking of, it basically rotates stripes using math at essentially zero cost, at the expense of even less flexibility wrt vdev setup
<the_oz> which you wouldn't mind if your hardware was all present like in a huge disk shelf datacenter build
<geist> huh that's interesting re: the overhead due to slop
<geist> since i have a 5 disk raidz1 it just so happens to have 0 overhead
<the_oz> blessed be the numbers
<geist> and yeah you're right that the way the stripe slice works on zfs doesn't seem like it's perfectly interleaved
<clever> there is another bit of overhead that is easy to miss: internally, zfs manages the entire vdev as an array of metaslabs; ext2/3/4 does similar
<geist> i noticed when doing some long reads of stuff on zfs it seems like at any given point one of the disks would get hammered more than the others
<geist> which kinda implies it's almost a raid4 like thing
<clever> but with zfs, every metaslab in the disk must be identically sized
<geist> but i may have just been imagining it
<clever> so if your disk isnt a multiple of the metaslab size, then you lose the tail end of it
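(A small sketch of that rounding, under the assumption of fixed 16 GiB metaslabs on a nominal 4 TB disk; both sizes are made up for illustration.)
```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t disk_bytes     = 4000787030016ull;           /* a nominal "4 TB" drive */
    uint64_t metaslab_bytes = 16ull * 1024 * 1024 * 1024; /* assumed 16 GiB metaslabs */

    /* only whole metaslabs count; the tail that doesn't fill one is unused */
    uint64_t usable = (disk_bytes / metaslab_bytes) * metaslab_bytes;

    printf("usable: %llu bytes, lost tail: %llu bytes\n",
           (unsigned long long)usable,
           (unsigned long long)(disk_bytes - usable));
    return 0;
}
```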
<the_oz> it will work fine without complaining but knowing it's there is autismal
<the_oz> draid improves that wrt metadata which isn't rotated
<clever> there is also another thing ive not had a chance to play with yet, vdev removal
<the_oz> which would explain the disk hammering, that's because it is
<clever> vdev removal will basically create a block device within the pool that cant allocate to the device being removed, then do a sort of mirror re-silvering, to copy all blocks from the removed disk to the internal block dev
<clever> and once done, flag that internal thing as not a target for allocations
<clever> so when any existing metadata says to read the removed disk, it will get re-directed back into zfs, and basically read a 2tb "file" stored on other vdevs
<clever> and as things get deleted, that becomes more and more sparse
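(A very loose sketch of that indirection, with made-up types and a made-up remap table standing in for whatever ZFS keeps internally; reads that still name the removed vdev get redirected onto space on the surviving vdevs.)
```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

struct blkaddr { unsigned vdev; uint64_t offset; };

/* hypothetical table built while the removed vdev was being evacuated */
struct remap_entry { uint64_t old_offset; struct blkaddr new_addr; };
static const struct remap_entry remap[] = {
    { 0 * 4096, { 1,  900 * 4096 } },
    { 1 * 4096, { 2, 1200 * 4096 } },
};

/* resolve a block pointer: addresses on the removed vdev are redirected */
static bool resolve(struct blkaddr in, unsigned removed_vdev, struct blkaddr *out)
{
    if (in.vdev != removed_vdev) {
        *out = in;                         /* normal case: still points at a live vdev */
        return true;
    }
    for (size_t i = 0; i < sizeof(remap) / sizeof(remap[0]); i++) {
        if (remap[i].old_offset == in.offset) {
            *out = remap[i].new_addr;
            return true;
        }
    }
    return false;                          /* nothing was ever stored there */
}

int main(void)
{
    struct blkaddr bp = { 3, 1 * 4096 };   /* a pointer still naming removed vdev 3 */
    struct blkaddr resolved;
    return resolve(bp, 3, &resolved) ? 0 : 1;
}
```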
<clever> the_oz: and yeah, according to the man page, draid uses a fixed stripe size, making the IO simpler, at the cost of larger blocks, and harming compression ratios
<clever> so if you setup a draid with 4 data disks and 1 parity disk, and 4kb blocks, then every allocation always has a minimum 16kb of data and 4kb of parity, end of story
<clever> so a file with 3 lines of text in it, still uses 16kb of disk space, plus parity
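(A sketch of that minimum-allocation math, assuming a draid layout of 4 data disks + 1 parity disk and 4 KiB blocks, i.e. the numbers from the example above; the helper is illustrative, not the real allocator.)
```c
#include <stdio.h>

#define BLOCK   (4u * 1024u)
#define D_DISKS 4u   /* assumed data disks per stripe */
#define P_DISKS 1u   /* assumed parity disks per stripe */

/* every allocation is rounded up to whole fixed-size stripes: D data blocks + P parity blocks */
static unsigned draid_alloc_bytes(unsigned file_bytes)
{
    unsigned stripe_data = D_DISKS * BLOCK;                 /* 16 KiB minimum data */
    unsigned stripes = (file_bytes + stripe_data - 1) / stripe_data;
    if (stripes == 0)
        stripes = 1;                                        /* even an empty-ish file pays one stripe */
    return stripes * (stripe_data + P_DISKS * BLOCK);
}

int main(void)
{
    printf("100-byte file -> %u KiB allocated\n", draid_alloc_bytes(100) / 1024);
    printf("64 KiB file   -> %u KiB allocated\n", draid_alloc_bytes(64 * 1024) / 1024);
    return 0;
}
```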
<the_oz> unless you specially allocate on nvme for small files
<the_oz> MAGIC
<clever> yeah, there are some flags to setup a second vdev for things like metadata and small files
<clever> and that could then be raidz or mirror
<the_oz> there's a forum post somewhere
<the_oz> wendell something dr strangelove
<bslsk05> ​forum.level1techs.com: ZFS Metadata Special Device: Z - L1 Articles & Video-related - Level1Techs Forums
karenw has quit [Ping timeout: 252 seconds]
netbsduser` has joined #osdev
xenos1984 has quit [Read error: Connection reset by peer]
steelswords94361 has quit [Read error: Connection reset by peer]
steelswords94361 has joined #osdev
simjnd has quit [Ping timeout: 276 seconds]
netbsduser` has quit [Ping timeout: 248 seconds]
<geist> i have been using an SSD as a cache vdev for my spinny rust
<geist> and it works amazingly well. there's a lot of FUD about cache vdevs, but i think it was based on an era when you'd have just another spinny disk as a cache vdev
<geist> but if the cvdev is much faster then i think it works well
xenos1984 has joined #osdev
<geist> also it allows me to manually set the in memory ARC cache size much smaller (like 8GB vs 32GB)
<geist> and then since the cvdev is just L2 ARC it nicely spills over
<the_oz> cache hierarchy - definitely better than metadata on same vdev updating itself because it updated itself
<the_oz> I don't know if that's what it's doing but I HEAR YOU ACCESSING THE DISK EVERY 5 SECONDS
<geist> the only real thing is the cache vdev can only be read-only on ZFS i believe. if you had a mirrored cache vdev it seems like you could make it read/write, but it doesn't have that facility i think
<geist> OTOH the cache vdevs can be added and removed at will
<geist> it just dumps the cache state if you remove it
<geist> oh here's an annoying thing: vdevs in zfs AFAIK can only be a full device, not a partition of another larger one
<geist> though i may be wrong there, but if you inspect a vdev it actually has a GPT and a few sub partitions (marked as sun solaris something)
<the_oz> nah it can be a partition
simjnd has joined #osdev
<geist> guess it then just ends up with nested partitions
<geist> maybe the GPT stuff is really just a protection thing to keep something else from trashing it
<geist> if it happens to be a raw disk
<the_oz> welllll that may be a sysinstall setup, in which case it tends to have boot partitions alongside the main data partition, especially if eli encrypted, but it's more just that the gpt partitions happen to have whatever data in a (boot - shadow stack whatever it's called, i forget - data) layout
<the_oz> but it could easily be MBR or just whole disk straight through
simjnd has quit [Ping timeout: 276 seconds]
<the_oz> naturally I prefer whole disk no matter what muh safety they advise
pie_ has quit []
vancz has quit []
<the_oz> it is a concern assuming a moron admins my disks
pie_ has joined #osdev
vancz has joined #osdev
<geist> morans is bad
<the_oz> fucking it up is MY job
<the_oz> ask self which disk? which shelf? am I in the right building? let's just turn both off
<the_oz> I actually did that one time "don't turn off this" "oh ok is that this one? no that one? ok it has the red strip yes? ok turned it off""
<the_oz> a thousand people just screamed
<heat> sup geist
<heat> > FWIW i've been using btrfs almost exclusively for at least 5 or 6 years and love it
<heat> yeah just don't run out of disk space
<heat> Pro Tip
<heat> mjg: mofer do you use zfs
<heat> or just dead filesystems like reiserfs or something
<heat> custom written ffs implementation for linux
cloudowind has quit [Ping timeout: 248 seconds]
<geist> i've run out of disk space many times
<geist> i think that was an old issue that has been generally resolved
<mjg> i store files on the cloud
<mjg> ez
<heat> the cloud is webscale
<heat> you store files on the cloud and it scales right up
<geist> i have crappy upload bandwidth
<bslsk05> ​www.youtube.com: - YouTube
<mjg> run local cloud
<heat> technically kinda yes
<the_oz> no
<heat> cloud providers' geographical distribution and replication of files makes it so
<heat> it's kind of not really in a place
netbsduser` has joined #osdev
<heat> as much as in every place everywhere
<mjg> can you run out of space in a cloud?
<the_oz> to google
<the_oz> "let me explain my cloud"
<heat> mjg: maybe but i wouldn't test it in case they're using btrfs
<the_oz> How does the law apply to cloud?
<heat> i would ask the cloud people
<the_oz> she was there right then lol
<mjg> back when CLOUD FUCKER was a new buzzword i got a fucking guy claiming the company is in fact a cloud
<mjg> cause some services are running on different machines
<mjg> like bro
<mjg> heat: so when are you implementing a NOVEL RPC method in onyx
<heat> great question!
<heat> maybe RPAL would be a killer feature that would spark onyx global domination
<mjg> literally one deployment away
<froggey> The Cloud is everywhere. It is all around us. Even now, in this very room. You can see it when you look out your window or when you turn on your television. You can feel it when you go to work... when you go to church... when you pay your taxes
<mjg> anyway RPAL is one more reason to not watch tiktok
<the_oz> RPCCall(RPCDev(RPCImplement()))
netbsduser` has quit [Ping timeout: 248 seconds]
<bslsk05> ​portal-2-pti.fandom.com <no title>
<kof673> its scales glisten in the bark of trees, its roar is heard in the wind, and its forked tongue strikes like.............<lightning zaps tree> </excalibur dragon>
Turn_Left has quit [Ping timeout: 245 seconds]
Turn_Left has joined #osdev
Turn_Left has quit [Read error: Connection reset by peer]
<bslsk05> ​i.imgur.com <no title>
Lucretia has quit [Read error: Connection reset by peer]
simjnd has joined #osdev