<discocaml> <diligentclerk> yeah I'm wondering whether that's better than nothing or not
<discocaml> <darrenldl> i guess depends on whether you want your analysis to be at least sound or at least complete
<discocaml> <darrenldl> maybe coverage is the better word
<discocaml> <darrenldl> anyhow, AI does neither, so not really sure what's the right measure of success
<discocaml> <diligentclerk> correctness is only important here to the degree that incorrectness is going to waste my time by sending me on a wild goose chase, I think.
<discocaml> <diligentclerk> I'll give it a shot
amadaluzia has joined #ocaml
<discocaml> <darrenldl> then an LLM is pretty much guaranteed to waste your time, especially with OCaml code
<discocaml> <darrenldl> i would personally just try putting together (haphazardly) a function call graph extraction tool, if one does not exist yet
<discocaml> <darrenldl> the tool itself doesn't even need to do the plotting, with d2, mermaid, plantuml, and graphviz dot around
<discocaml> <darrenldl> proponents of LLMs have been quite vocal about "double-checking the LLM's work manually is still faster than doing it by hand", but in this case it feels like, to be able to double check, you'd need to read all the code anyway
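A minimal sketch of the "tool doesn't need to do the plotting" point: given a call relation extracted by whatever means, emitting Graphviz dot input is only a few lines of OCaml. The edge list below is hypothetical; a real extractor would produce it, e.g. by walking the typed AST with compiler-libs.

```ocaml
(* Sketch: turn an already-extracted call relation into Graphviz dot,
   which graphviz can then render.  Edge data is made up for illustration. *)
let edges =
  [ ("Main.run", "Parser.parse");
    ("Parser.parse", "Lexer.token");
    ("Main.run", "Printer.pp") ]

let () =
  print_endline "digraph calls {";
  List.iter
    (fun (caller, callee) -> Printf.printf "  %S -> %S;\n" caller callee)
    edges;
  print_endline "}"
```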
<dh`> a module dependency graph is probably more useful
<dh`> call graphs are awkwardly large for something the size of rocq
<dh`> also surprisingly hard to extract reliably
<discocaml> <darrenldl> good point
<discocaml> <darrenldl> yeah, easier to parse and extract just module usage
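In the same spirit, a rough sketch of "just parse and extract module usage": over-approximate the modules an .ml file mentions by scanning for `SomeModule.` prefixes. This also picks up constructors, nested modules, and text in comments, so it is a first cut rather than a real dependency analysis (it needs the str library).

```ocaml
(* Rough sketch: list the module names an .ml file appears to reference, by
   scanning for "SomeModule." occurrences.  Over-approximates on purpose.
   Build with: ocamlfind ocamlopt -package str -linkpkg moddeps.ml *)
let read_file file =
  let ic = open_in_bin file in
  let s = really_input_string ic (in_channel_length ic) in
  close_in ic;
  s

let module_refs file =
  let src = read_file file in
  let re = Str.regexp "\\([A-Z][A-Za-z0-9_']*\\)\\." in
  let rec loop pos acc =
    match Str.search_forward re src pos with
    | exception Not_found -> acc
    | i ->
        let m = Str.matched_group 1 src in
        loop (i + String.length m + 1) (m :: acc)
  in
  List.sort_uniq String.compare (loop 0 [])

let () =
  Array.iteri
    (fun i file ->
      if i > 0 then
        Printf.printf "%s: %s\n" file (String.concat " " (module_refs file)))
    Sys.argv
```

Feeding those per-file lists into a dot emitter like the sketch above gives the module dependency graph dh` is describing.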
Haudegen has quit [Quit: Bin weg.]
amadaluzia has quit [Quit: You]
DecembersTruly has joined #ocaml
inline_ has joined #ocaml
DecembersTruly has quit [Quit: Coffee Break]
mercxry has quit [Server closed connection]
inline has quit [Ping timeout: 244 seconds]
mercxry has joined #ocaml
myrkraverk__ has joined #ocaml
myrkraverk_ has quit [Ping timeout: 252 seconds]
myrkraverk has joined #ocaml
myrkraverk__ has quit [Ping timeout: 244 seconds]
<discocaml> <bluddy5> LLMs are often better at reading and explaining code than they are at creating code that considers all the subtleties of a large codebase. I know that in C++ they can be spectacular at diving in and giving you a good overview. But there's a huge disparity between models, to the point that you'll get entirely different experiences. Claude Sonnet and ChatGPT 5 will probably give you the best information. With Sonnet, I was even able to essential
<discocaml> <bluddy5> I have no idea if they're anywhere close to this in ocaml, but it's worth a shot.
marijanp has joined #ocaml
agentcasey has quit [Quit: ZNC 1.9.0+deb2build3 - https://znc.in]
agentcasey has joined #ocaml
YuGiOhJCJ has joined #ocaml
euphores has joined #ocaml
_whitelogger has joined #ocaml
polykernel_ has joined #ocaml
polykernel has quit [Ping timeout: 255 seconds]
polykernel_ is now known as polykernel
Tuplanolla has joined #ocaml
marijanp has left #ocaml [Disconnected: Hibernating too long]
Serpent7776 has joined #ocaml
bartholin has joined #ocaml
YuGiOhJCJ has quit [Quit: YuGiOhJCJ]
Serpent7776 has quit [Ping timeout: 272 seconds]
<discocaml> <charstring.> When generating Python code, I've had it create better code by asking it to solve the problem in Haskell or OCaml and only then make the Python function.
<discocaml> <diligentclerk> alright claude's website mentions a tool that does what I was talking about, you can just feed the whole Coq repo in and ask it a couple questions
romildo has joined #ocaml
Serpent7776 has joined #ocaml
toastal has quit [Ping timeout: 252 seconds]
LainIwakura has joined #ocaml
toastal has joined #ocaml
LainIwakura has quit [Quit: Client closed]
Serpent7776 has quit [Ping timeout: 244 seconds]
Haudegen has joined #ocaml
romildo has quit [Quit: Leaving]
tremon has joined #ocaml
inline_ is now known as inline
nemin has joined #ocaml
switchy_ has joined #ocaml
switchy has quit [Ping timeout: 252 seconds]
Serpent7776 has joined #ocaml
Serpent7776 has quit [Ping timeout: 252 seconds]
olle has quit [Ping timeout: 244 seconds]
emp has quit [Server closed connection]
emp has joined #ocaml
Haudegen has quit [Quit: No Ping reply in 180 seconds.]
Haudegen has joined #ocaml
nemin has quit [Quit: zzz]
Serpent7776 has joined #ocaml
remexre has quit [Server closed connection]
remexre has joined #ocaml
bartholin has quit [Remote host closed the connection]
euphores has quit [Ping timeout: 244 seconds]
switchy_ is now known as switchy
switchy has quit [Remote host closed the connection]
switchy has joined #ocaml
Serpent7776 has quit [Ping timeout: 255 seconds]
welterde has quit [Ping timeout: 276 seconds]
<discocaml> <diligentclerk> I tried Flix and it seems interesting. In OCaml algebraic effects are handled kind of "individually" - one "exception" corresponds to a single exception handler. But in Flix effects are whole namespaces with grouped/related functionality: there is a `Console` effect and you can do `Console.write`, `Console.read`, etc; it looks almost like dynamically scoped *modules*. In the "Hello world" wordcount program on their website, the main
<discocaml> <diligentclerk> and it seems like the reading is that `Console` and `FileReadWithResult` should be understood here as something like module types / interfaces, where we require that at the call site of `wc` there should be dynamically installed "handlers" which specify modules implementing the effects `Console` and `FileReadWithResult`.
<discocaml> <diligentclerk> This is complicated and novel to me, so I'm trying to better understand it. Can we make this precise? Has anyone written down an ML with "dynamically scoped modules", in which, like Flix, a function requires static annotations as to what modules are in scope? In what ways would this be more or less expressive than modular implicits? It seems like in some ways the functionality would overlap.
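One way to make the "dynamically scoped modules" reading concrete in OCaml terms: with OCaml 5 effect handlers, a single effect can return a first-class module, so whichever handler is installed in the dynamic scope decides which implementation of a whole signature is in force. A minimal sketch; the `CONSOLE` signature and all names are made up for illustration, not Flix's actual semantics.

```ocaml
(* Sketch (OCaml 5): approximating a Flix-style grouped effect by having one
   effect hand back a first-class module, i.e. a "dynamically scoped module". *)
open Effect
open Effect.Deep

module type CONSOLE = sig
  val write : string -> unit
  val read : unit -> string
end

type _ Effect.t += Console : (module CONSOLE) Effect.t

(* Client code asks the dynamic context for an implementation, then uses
   ordinary module members, much like Console.write / Console.read in Flix. *)
let greet () =
  let (module C : CONSOLE) = perform Console in
  C.write "name? ";
  C.write ("hello " ^ C.read () ^ "\n")

(* A handler installs a concrete implementation for its dynamic extent. *)
let with_stdio_console f =
  try_with f ()
    { effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Console ->
            Some (fun (k : (a, _) continuation) ->
              continue k
                (module struct
                   let write = print_string
                   let read () = read_line ()
                 end : CONSOLE))
        | _ -> None) }

let () = with_stdio_console greet
```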
<discocaml> <diligentclerk> This seems relevant. Modular implicits are implicit parameters, after all, so I might be mixing some things up.
<discocaml> <diligentclerk> Although I am never quite sure whether a distinction between A and co-A is really meaningful or a matter of perspective. For example, by currying, a function A x B -> C is equivalent to a function B -> C^A. But (A x -) is a comonad and (-)^A is a monad. The difference between "this is a comonad" and "this is a monad" is just a matter of perspective here. So saying "ah, this is really a comonadic phenomenon rather than a monadic one" doesn
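For the record, both sides of that equivalence are easy to write out. A small OCaml sketch of curry/uncurry together with the reader monad on a -> _ and the environment comonad on a * _, with names chosen here for illustration.

```ocaml
(* The currying isomorphism between a * b -> c and b -> a -> c, plus the
   two structures it relates: a -> _ is the reader monad, a * _ is the
   environment comonad. *)
let curry f b a = f (a, b)        (* takes f : a * b -> c *)
let uncurry g (a, b) = g b a      (* takes g : b -> a -> c *)

(* Reader monad on env -> _ *)
let return x = fun _env -> x
let bind m f = fun env -> f (m env) env

(* Environment comonad on env * _ *)
let extract (_env, x) = x
let extend k (env, x) = (env, k (env, x))
```

curry and uncurry witness the equivalence, and the reader/environment pair are the monad and comonad being contrasted above.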