<discocaml>
<diligentclerk> There's the janestreet-bleeding-edge repo, maybe you mean something else by that
<discocaml>
<diligentclerk> The documentation certainly makes it sound like there are internal versions of Base and Core already that use fork-specific language features.
<dh`>
I realize there's been a long-term and useful working relationship
<dh`>
announcing a fork like this seems to me like it would be a preparatory move to pulling the chain on that relationship.
<dh`>
but maybe not, no need to assume ill intent
<dh`>
I'm more concerned about the fearless concurrency nonsense
<companion_cube>
well currently the OCaml 5 concurrency landscape doesn't look super great
<companion_cube>
something that would allow us to have a nice work stealing scheduler without the race conditions sounds.. nice?
<companion_cube>
I'm mostly worried about the complexity
<discocaml>
<froyo> it's a fact. they spoke of using e.g. modes and templates as annotations in existing versions already while they were developing the features. you can see it today in their sources. upstream ocaml conveniently ignores annotations it doesn't understand, so it all works out, at least as far as compilation goes.
<dh`>
well
<dh`>
the problem is the whole "fearless concurrency" idea is more or less snake oil
<dh`>
avoiding data races is the first 0% of writing correct concurrent software
<companion_cube>
it's at least the first 25%, come on
<companion_cube>
it's not an end-it-all but it's pretty good already
<dh`>
not really
<dh`>
data races are a pretty useless criterion
<dh`>
they became popular among academics because they're easy to label
<dh`>
suppose you have an incorrect concurrent program. now change it so it takes a global lock around every memory access. you haven't changed the program semantics, so it's still broken. but no more data races!
<dh`>
real concurrency bugs are about inconsistent accesses to multiple things
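[editor's sketch: a minimal OCaml illustration of dh`'s point, with hypothetical names. Every access below is individually locked, so the program is data-race-free, yet the intended invariant !a = !b can still be observed broken, because the two accesses are not atomic as a pair.]

```ocaml
let m = Mutex.create ()
let a = ref 0
let b = ref 0

(* intended invariant: !a = !b *)
let incr_both () =
  Mutex.lock m; incr a; Mutex.unlock m;
  (* another thread can run here and observe !a <> !b *)
  Mutex.lock m; incr b; Mutex.unlock m

let read_pair () =
  Mutex.lock m;
  let x = !a in
  Mutex.unlock m;
  Mutex.lock m;
  let y = !b in
  Mutex.unlock m;
  (x, y)  (* may be inconsistent, e.g. (5, 4): a race condition, but no data race *)
```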
<companion_cube>
oh we've had this discussion before
<dh`>
yes, I'm sure we have
<companion_cube>
suppose you put a lock around each access, and it fixes your problem??
<companion_cube>
I mean
<dh`>
I start fuming whenever anyone starts talking about data races :-)
<dh`>
it won't
<dh`>
well
<dh`>
it won't with word-sized memory accesses and C code
<companion_cube>
cause you're talking about atomics?
<dh`>
since those are already atomic in the hardware
<companion_cube>
but locks can protect multiple words
<dh`>
in ocaml where you can assign large values it's slightly different
<dh`>
sure, but those aren't data races
<dh`>
those are real race conditions, where you read a and read b and someone else has simultaneously updated b so you get inconsistent values out
<companion_cube>
guess what, rust can also protect you from that stuff
<companion_cube>
you can read an i128 in peace
<dh`>
yeah, but again the real problems come from reading two i128s
<companion_cube>
no, you could get a torn read no??
<companion_cube>
if someone else writes the second word of it
<dh`>
sure
<dh`>
ok, so maybe that gets you 1% rather than 0% and I was exaggerating before
<companion_cube>
what about, say, updating a hashtable while someone else is reading it?
<companion_cube>
cause rust protects you from that too
<dh`>
only at the cost of having a global lock for the entire hashtable
<companion_cube>
sure, and?
<dh`>
and you might as well not bother if you're interested in significant levels of parallelism
<companion_cube>
(I mean assuming you don't use a concurrent hashmap but whatever, yes, that's the easiest fix)
<companion_cube>
ohhhh you are so moving the goalposts
<companion_cube>
cause this fixes a real issue
<companion_cube>
with a lock
<companion_cube>
the incorrect program became correct
<companion_cube>
(it might be a rarely accessed hashtable, who knows? it's realistic enough)
<dh`>
yeah, you can also just put a biglock around the entire program
<companion_cube>
you can also format your hard drive
<companion_cube>
but this is a real situation that rust does protect you from
<companion_cube>
(I know it's real because I've had the issue in OCaml5)
<companion_cube>
and it's fixed with a lock
<companion_cube>
doesn't seem useless to me
<dh`>
why are you bothering if you're going to biglock the whole table?
<companion_cube>
because again, it might not be accessed in a hot loop
<companion_cube>
but you still want to access it correctly
<companion_cube>
locks aren't that slow if contention is low
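[editor's sketch: the fix companion_cube describes, in OCaml: one mutex around a plain Hashtbl. Module and function names are illustrative, not from the discussion. On the fast path an uncontended lock is cheap, which matches the point above about low contention.]

```ocaml
module Locked_tbl = struct
  type ('k, 'v) t = { m : Mutex.t; tbl : ('k, 'v) Hashtbl.t }

  let create () = { m = Mutex.create (); tbl = Hashtbl.create 16 }

  (* run f with the lock held, releasing it even if f raises *)
  let with_lock t f =
    Mutex.lock t.m;
    Fun.protect ~finally:(fun () -> Mutex.unlock t.m) f

  let add t k v = with_lock t (fun () -> Hashtbl.replace t.tbl k v)
  let find_opt t k = with_lock t (fun () -> Hashtbl.find_opt t.tbl k)
end
```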
<dh`>
why bother? if it's not high traffic, why not, e.g., have a thread that owns it and send the thread messages?
<companion_cube>
a whole thread for that?? why
<discocaml>
<yawaramin> let's say a fiber, not a thread
<dh`>
why not? threads are supposed to be cheap
<dh`>
sure, pthreads aren't
<companion_cube>
sure, now you want to implement channels
<dh`>
neither ocaml nor rust has that problem
<companion_cube>
how do you ensure the proper use of channels?
<companion_cube>
oh look, rust also protects you from using a value after sending it through a channel
<companion_cube>
OCaml doesn't
<companion_cube>
isn't that interesting?
<dh`>
for simple cases like the one you're talking about, it's no harder than using the lock and more robust
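[editor's sketch: dh`'s owner-thread alternative in OCaml, using the stdlib threads library's Event channels; the request type and names are made up for illustration, and it assumes the program is linked against the threads library.]

```ocaml
type req =
  | Add of string * int
  | Find of string * int option Event.channel  (* carries a reply channel *)

(* one thread owns the table; everyone else talks to it over a channel *)
let start_owner () : req Event.channel =
  let reqs = Event.new_channel () in
  let _owner = Thread.create (fun () ->
      let tbl = Hashtbl.create 16 in
      while true do
        match Event.sync (Event.receive reqs) with
        | Add (k, v) -> Hashtbl.replace tbl k v
        | Find (k, reply) ->
            Event.sync (Event.send reply (Hashtbl.find_opt tbl k))
      done) ()
  in
  reqs

(* client side: send a request, then wait for the reply *)
let find reqs k =
  let reply = Event.new_channel () in
  Event.sync (Event.send reqs (Find (k, reply)));
  Event.sync (Event.receive reply)
```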
<companion_cube>
you need to switch to the other thread/fiber and back
<companion_cube>
but sure, whatever
<dh`>
yeah, that takes about a dozen instructions
<companion_cube>
🤨
<dh`>
if you care about performance enough to object to that, you also can't have a biglock :-)
<companion_cube>
now thread switching is faster than a lock? what
<companion_cube>
I didn't say a biglock, I said a lock around a table that's accessed from time to time
<companion_cube>
the lock is also a few instructions in the fast path, and you don't context switch
<dh`>
channels are more robust than locks, was the point
<companion_cube>
maybe, depends
<dh`>
and this problem is still artificial
<companion_cube>
says you
<dh`>
suppose you have two tables, a forward table and a reverse index
<companion_cube>
I put a lock around a record with both inside
<companion_cube>
next question?
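[editor's sketch: companion_cube's fix in OCaml: one mutex guards a record holding both the forward table and the reverse index, so they can only change together; field names are illustrative.]

```ocaml
type t = {
  m : Mutex.t;
  fwd : (string, int) Hashtbl.t;  (* forward table *)
  rev : (int, string) Hashtbl.t;  (* reverse index *)
}

let create () =
  { m = Mutex.create ();
    fwd = Hashtbl.create 16;
    rev = Hashtbl.create 16 }

let add t k v =
  Mutex.lock t.m;
  (* both updates happen in one critical section, so no reader
     can observe the two tables out of sync *)
  Hashtbl.replace t.fwd k v;
  Hashtbl.replace t.rev v k;
  Mutex.unlock t.m
```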
<dh`>
or a table but it needs to be consistent with something else
<dh`>
yeah
<dh`>
and guess what? that's not a data race problem
<dh`>
except in a very vague sense
<companion_cube>
it can be both
<companion_cube>
the lock protects you against data races _and_ a broader race condition
<companion_cube>
(channels aren't perfect either, you can deadlock with them just as easily)
<dh`>
yeah, and the broader race condition is the real problem, the data races are artificial
<companion_cube>
man I just told you I had this kind of problem in a real program
<companion_cube>
don't go and tell me it's artificial
<dh`>
ok fine, it's real
<dh`>
for this particular program it happens to be adequate to lock this table by itself, and locking the whole table at once isn't prohibitively expensive
<companion_cube>
contention is the problem, not locking
<dh`>
the general case requires stronger consistency, and there are various ways to get it but data race freedom doesn't get you very far
<companion_cube>
in the bidirectional case, it'd tell you you need a lock or something
<companion_cube>
then you can look and realize the lock needs to protect both tables
<companion_cube>
and well, it's better than not having any warning until the race condition shows up
<dh`>
and the problem with "fearless concurrency" is not so much that some of these widgets can't be helpful but that it's pretending to solve all your problems for you
<dh`>
the warning comes when you enable threading
<dh`>
shouldn't do that without having already planned out what needs to be protected and how
<companion_cube>
who says "solve all your problems"? they all c ome with warning labels
<dh`>
that's what the advertising buzz is about
<dh`>
"fearless"
<dh`>
not "here are some bits that might help you occasionally"
<companion_cube>
"here are some bits that are pretty helpful, the rest is a problem for everyone anyway" is indeed longer
<dh`>
yeah but you can be FEARLESS
<dh`>
as opposed to needing to understand what you're doing
<companion_cube>
I'd feel a lot less fearful for sure
<dh`>
it promotes complacency
<companion_cube>
let's also throw away static types
<companion_cube>
they don't fix all the bugs
<dh`>
nobody goes around saying FEARLESS FUNCTIONS, either
<dh`>
also reasonable types catch a considerably larger fraction of the problems
<companion_cube>
vibes
<dh`>
meanwhile, as of the last time I looked at the rust book at least, the FEARLESS CONCURRENCY section didn't even mention the existence of condition variables, let alone offer anything to help with them
<companion_cube>
well the stdlib offers channels, so you should favor that I guess
<companion_cube>
and tokio also tends to suggest channels
<dh`>
I do favor channels for a lot of things
<companion_cube>
(I've never used a condition variable for anything but a queue)
<dh`>
IDK what you've been writing, but condition variables are the common case
<companion_cube>
so you should be happy about rust
<companion_cube>
huh
<companion_cube>
I mean I use more locks than condition variables, starting with the fact that each condition requires a lock
<dh`>
I like that rust has channels
<dh`>
it's the attitude and marketing I don't approve of
<dh`>
you'll routinely have multiple condition variables for a single lock
<dh`>
the lock protects some state, you generally want a separate condition variable for each state transition that you need to reason about concurrently
<companion_cube>
like queue empty/queue full??
<dh`>
yeah
<dh`>
you can share of course but then you get thundering herds
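[editor's sketch: the bounded queue being discussed, in OCaml: one mutex, and one condition variable per state transition (not-empty, not-full), so each signal wakes only the side that can make progress instead of a thundering herd.]

```ocaml
type 'a bqueue = {
  m : Mutex.t;
  not_empty : Condition.t;  (* signalled on push *)
  not_full  : Condition.t;  (* signalled on pop *)
  q : 'a Queue.t;
  cap : int;
}

let create cap =
  { m = Mutex.create ();
    not_empty = Condition.create ();
    not_full = Condition.create ();
    q = Queue.create (); cap }

let push b x =
  Mutex.lock b.m;
  while Queue.length b.q >= b.cap do Condition.wait b.not_full b.m done;
  Queue.push x b.q;
  Condition.signal b.not_empty;  (* wake one consumer *)
  Mutex.unlock b.m

let pop b =
  Mutex.lock b.m;
  while Queue.is_empty b.q do Condition.wait b.not_empty b.m done;
  let x = Queue.pop b.q in
  Condition.signal b.not_full;   (* wake one producer *)
  Mutex.unlock b.m;
  x
```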
<companion_cube>
heh, sure
<companion_cube>
but again it's mostly for queues
<dh`>
and for that matter, you can just busyloop and not use any condition variables at all, it'll just make your life miserable
<dh`>
well
<dh`>
it's anything that has state transitions
<companion_cube>
eg I do promises with an atomic and callbacks
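[editor's sketch: one way to read "promises with an atomic and callbacks" in OCaml 5: the state lives in a single Atomic.t and is flipped with compare_and_set. This is an assumption about the design, not companion_cube's actual code.]

```ocaml
type 'a state =
  | Pending of ('a -> unit) list  (* callbacks waiting for the value *)
  | Resolved of 'a

type 'a promise = 'a state Atomic.t

let create () : 'a promise = Atomic.make (Pending [])

let rec on_resolve p f =
  match Atomic.get p with
  | Resolved v -> f v
  | Pending cbs as old ->
      if not (Atomic.compare_and_set p old (Pending (f :: cbs))) then
        on_resolve p f  (* lost a race with another writer; retry *)

let rec resolve p v =
  match Atomic.get p with
  | Resolved _ -> invalid_arg "promise already resolved"
  | Pending cbs as old ->
      if Atomic.compare_and_set p old (Resolved v)
      then List.iter (fun f -> f v) cbs
      else resolve p v
```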
<dh`>
ethernet driver, for example: when do you get carrier? when is the interface "up" (in the sense of being able to send packets)? when is it ready for another packet? when is there a packet ready to read?
<dh`>
stuff like that
<companion_cube>
guess it depends on whether you have a whole thread for that again
<companion_cube>
but I'm not writing ethernet drivers either, so
<dh`>
yes, you can have a thread own the driver
<dh`>
for a lot of drivers that's a very reasonable thing to do
<dh`>
for network drivers, not so much because you've gotta be able to suck down gigabit rates
<dh`>
and there's all kinds of wild stuff out there as a result
<dh`>
anyway it does depend on what you're doing
<dh`>
the same way compilers don't generally need a lot of state abstraction
<dh`>
but kernels absolutely do
<dh`>
(and AFAIK linux still doesn't have condition variables 30 years later; they keep inventing broken substitutes)
<companion_cube>
they have RCU or whatever, idk
<dh`>
(but that's a whole other rant)
<dh`>
RCU is a lock substitute
<companion_cube>
the rest is most likely event driven, not with a full thread for each device
<dh`>
(the rest of which?)
<dh`>
(sorry, wasn't clear)
<companion_cube>
oh yeah I mean the network drivers and such
<dh`>
yes
<dh`>
the typical device driver is a halfassed mixture of event triggers, thread sleeps, and state machine
<dh`>
and so they're both concurrent and complicated
<dh`>
and this is why driver bugs are universal
<dh`>
drivers get neither the attention the core kernel does nor the workout
<dh`>
they are also mostly written by juniors or third-stringers who can't get out of it
<dh`>
and when they come directly from hardware vendors, often by hardware folks who don't quite understand software
<dh`>
note that even single-threaded drivers are still concurrent, because the device itself is running concurrently with the driver code
<dh`>
and except for the absolutely most simpleminded stuff you still need to deal with interrupts
<dh`>
but it's still a lot easier to think about that way
<dh`>
anyway
<dh`>
in that kind of environment data race considerations really don't get you very far
<dh`>
another example (in the core kernel) is the wait() family of syscalls
<dh`>
parent process goes to sleep until the (or some, depending on details) child process exits, then it wakes up and collects the child process's exit status and returns it out
<dh`>
cannot be done with just locks unless you're into busy-looping
<dh`>
and the data accesses are not the tricky part
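[editor's sketch: the shape of wait() that dh` describes, as a toy OCaml model with Mutex and Condition; kernel details are elided and the names are invented. The sleep/wakeup handoff is the part that locks alone can't express without busy-looping.]

```ocaml
type children = {
  m : Mutex.t;
  child_exited : Condition.t;
  mutable zombies : (int * int) list;  (* (pid, exit status) *)
}

(* run when a child exits: record its status, wake a sleeping parent *)
let record_exit c pid status =
  Mutex.lock c.m;
  c.zombies <- (pid, status) :: c.zombies;
  Condition.signal c.child_exited;
  Mutex.unlock c.m

(* the parent's wait(): sleep until some child has exited *)
let wait_any c =
  Mutex.lock c.m;
  while c.zombies = [] do Condition.wait c.child_exited c.m done;
  match c.zombies with
  | [] -> assert false  (* unreachable: the loop guarantees non-empty *)
  | (pid, status) :: rest ->
      c.zombies <- rest;
      Mutex.unlock c.m;
      (pid, status)
```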
<olle_>
"Memory Safety Without Tagging nor Static Type Checking"
<olle_>
Anyone heard about this strategy?
<discocaml>
<vortex1000> Hi , i am Krenar from Albania , i am 23 year old and i have studied computer science . If you are interested to buy a JetBrains licence for 35$ please dm me .i can give you all the info you need in dm's if you are interested if not sorry . My discord is vortex1000
<discocaml>
<vortex1000> i can even give you the licence first if you first add me on facebook
<companion_cube>
This is off topic and probably a scam
<octachron>
(Note that the spam has been removed on the discord side)