<geist>
sure. you should be able to create an account and edit it
<geist>
we don't really run osdev.org here, the channel just tends to be associated with it
<_Heat>
i'm 0.5s away from linux on a make -j16
<_Heat>
though i'm on a VM vs bare hardware
<_Heat>
is this a win? I think it's a win
<geist>
you believe what you need to believe
<_Heat>
i believe this is good yeah
<_Heat>
at this scale you start to see LRU scaling problems and page cache scaling problems
<geist>
yah
<geist>
how many cores are you running in this case?
<geist>
since you have a VM it's 'fun' to start adding cores to your guest and see where it starts to scale badly
<_Heat>
16 qemu cores
<_Heat>
but only ~12 of them are real
<geist>
ah yeah, running more than the host has is probably not good
<_Heat>
i technically have 24 threads but yeah
<_Heat>
if i had gone for the 9950X i would have 16 cores 32 threads, which i slightly regret
<_Heat>
(not going for)
<_Heat>
so i added LRU batching and removed the page cache spinlock, because it really wasn't needed there
<_Heat>
LRU batching is a huge complication on top of what I had, but it is what it is
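_Heat doesn't show his implementation, but the usual shape of LRU batching (hypothetical names here; Linux's per-CPU pagevecs work along the same lines) is to buffer page pointers locally and take the shared LRU lock once per batch instead of once per page:

```c
#include <stddef.h>

#define LRU_BATCH 15    /* pages buffered before one locked flush */

struct page { struct page *next; };

/* Global LRU list; the counter stands in for acquiring its lock. */
static struct page *lru_head;
static unsigned long lru_lock_acquisitions;

struct lru_batch {
    struct page *pages[LRU_BATCH];
    size_t n;
};

/* Splice the whole batch in under a single lock acquisition. */
static void lru_batch_flush(struct lru_batch *b)
{
    if (b->n == 0)
        return;
    lru_lock_acquisitions++;            /* spin_lock(&lru_lock) */
    for (size_t i = 0; i < b->n; i++) {
        b->pages[i]->next = lru_head;
        lru_head = b->pages[i];
    }
    /* spin_unlock(&lru_lock) */
    b->n = 0;
}

/* Queue a page; only touch the shared lock when the batch fills. */
static void lru_add(struct lru_batch *b, struct page *p)
{
    b->pages[b->n++] = p;
    if (b->n == LRU_BATCH)
        lru_batch_flush(b);
}
```

The win is that under a parallel build, each core contends on the LRU lock roughly 1/15th as often; the cost is the extra bookkeeping (and flush-on-demand paths) that _Heat calls "a huge complication".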
<_Heat>
i also found a stupid problem in my scheduler where, while scheduling a thread for the first time, i never actually tried to IPI the core for a resched
<_Heat>
which turned my 3s builds into 40s
<_Heat>
because scheduling a job had to wait for a full idle thread quota (~10ms)
<geist>
oh yeah that'll do it
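The fix amounts to kicking the target core when work is queued on it. A toy sketch (hypothetical names; the IPI send is modeled as a counter standing in for something like `apic_send_ipi(cpu, RESCHED_VECTOR)`):

```c
#include <stdbool.h>

#define NR_CPUS 16

struct cpu {
    bool need_resched;
    unsigned long resched_ipis;  /* stand-in for real IPIs sent */
};

static struct cpu cpus[NR_CPUS];
static int current_cpu;

/* First-time scheduling of a thread onto `target`.  The bug described
 * above was omitting the IPI: the remote core then sat in its idle
 * thread until its quota expired (~10ms) before noticing new work. */
static void sched_start_thread(int target)
{
    /* runqueue_insert(&cpus[target].rq, thread);   (elided) */
    cpus[target].need_resched = true;
    if (target != current_cpu)
        cpus[target].resched_ipis++;   /* kick the core immediately */
}
```

With a ~10ms idle quantum and a build spawning thousands of short jobs, each missed kick adds up to exactly the 3s-to-40s blowup described above.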
<_Heat>
then just a lot of TLB IPI batching + LRU batching got me down to 2.3s, linux on bare metal does 1.9s
<_Heat>
but this is decent
<_Heat>
i was expecting more dcache contention from a build, but i guess not
<geist>
you should test linux in the same qemu environment to be fair
<geist>
though depending on how much assist you're utilizing from the KVM bits you might see differences there
<geist>
iirc one of the more modern KVM feature sets that you can access is some sort of TLB shootdown assist via vmcalls
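geist's recollection is close: KVM does advertise a PV TLB flush assist, KVM_FEATURE_PV_TLB_FLUSH (bit 9 of the EAX feature leaf at CPUID 0x40000001, per the kernel's KVM CPUID documentation), which lets the host flush on behalf of preempted vCPUs so the guest can skip shootdown IPIs to them. A minimal feature check (the CPUID read itself is elided; `eax` is what a guest gets from that leaf):

```c
#include <stdint.h>
#include <stdbool.h>

/* KVM paravirt feature leaf and the PV TLB flush bit, as documented
 * in the Linux kernel's Documentation/virt/kvm/cpuid.rst. */
#define KVM_CPUID_FEATURES       0x40000001u
#define KVM_FEATURE_PV_TLB_FLUSH 9

/* `eax` is the value a guest reads via CPUID with EAX=0x40000001,
 * after finding the "KVMKVMKVM" signature at leaf 0x40000000. */
static bool kvm_pv_tlb_flush_supported(uint32_t eax)
{
    return (eax >> KVM_FEATURE_PV_TLB_FLUSH) & 1u;
}
```

Whether QEMU exposes the bit depends on the `-cpu` flags (e.g. `kvm-pv-tlb-flush`), which may be why it wasn't visible in _Heat's setup.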
<_Heat>
yeah I'm not
<_Heat>
testing linux under QEMU would only give me an advantage
<_Heat>
(well I assume they use all fancy KVM features, so maybe not)
<geist>
right that's what i mean
<_Heat>
hmm, doesn't seem like there's a TLB shootdown assist