druppy has quit [Read error: Connection reset by peer]
druppy has joined #yocto
florian has joined #yocto
ptsneves has quit [Ping timeout: 244 seconds]
leon-anavi has joined #yocto
druppy has quit [Read error: Connection reset by peer]
frieder has joined #yocto
<rburton>
RP: pro-tip: don't set number of parse threads to 1
<RP>
rburton: heh, what does that do? :)
<RP>
rburton: I did fix master-next so I'm curious if it performs better/worse btw
Kubu_work has joined #yocto
Guest18 has joined #yocto
Vonter has quit [Ping timeout: 260 seconds]
Vonter has joined #yocto
<Guest18>
I have a project that normally uses gcc rather than clang, but it needs clang to build one component. Is adding clang-native to DEPENDS the right approach for this?
<mcfrisk>
Guest18: see TOOLCHAIN variable
<RP>
mcfrisk: it sounds like Guest18 may need both
<Guest18>
gcc is used throughout the project, but they use clang to build something external. Do I still need TOOLCHAIN = "clang"? I guess I don't need to set TOOLCHAIN if the clang-native I add to DEPENDS solves the problem, right?
<RP>
Guest18: I'd add the DEPENDS and try that, yes
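[editor's note: a minimal sketch of the two approaches discussed above, in BitBake recipe syntax; it assumes a recipe that otherwise builds with gcc and only needs a host clang for one component, and is not taken from the channel]

    # host clang for the component that needs it; the target compiler stays gcc
    DEPENDS += "clang-native"

    # alternative (probably not needed here): switch the whole recipe's
    # toolchain to clang via oe-core's built-in support
    #TOOLCHAIN = "clang"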
vladest has joined #yocto
<Guest18>
mcfrisk RP thanks
Fr4nk has quit [Ping timeout: 252 seconds]
Fr4nk has joined #yocto
<Guest18>
Is the addition of clang to oe-core something new? It seems like it wasn't there in previous versions and we were using meta-clang.
dmoseley_ has joined #yocto
dmoseley has quit [Ping timeout: 245 seconds]
Guest82 has quit [Quit: Client closed]
<RP>
Guest18: new and still being worked out in places
<rburton>
File "/usr/lib/python3.12/cProfile.py", line 109, in runcall
<rburton>
self.enable()
<rburton>
ValueError: Another profiling tool is already active
<RP>
rburton: did master-next make it better/worse/same ?
<rburton>
i didnt try next yet, i'll do that now
ablu has quit [Ping timeout: 252 seconds]
ablu has joined #yocto
Guest9 has joined #yocto
<Guest9>
fatal error: 'bits/syscall-32.h' file not found
<Guest9>
| 23 | #include <bits/syscall-32.h>
<Guest9>
Isn't that a glibc header? I added glibc to DEPENDS but it still complains.
<JaMa>
isn't that from a native build? e.g. the target nodejs build uses -m32 to build a native mksnapshot when building for 32-bit target MACHINEs
<Guest53>
how does one use "subdir" in walnascar? In scarthgap it used to work like this: SRC_URI += "file://abc;subdir=git/aa/ab/bc/"
Kubu_work has joined #yocto
<Guest9>
Guest53: I don't know offhand, but fetch oe-core (walnascar) and meta-oe (walnascar), grep for "subdir=", and check how the recipes use it.
<Guest53>
Guest9 thank you very much
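[editor's note: a hedged sketch only, not something confirmed in the channel. If the scarthgap-era "subdir=git/..." stopped working because newer releases unpack into ${UNPACKDIR} and name the git checkout via BB_GIT_DEFAULT_DESTSUFFIX, referencing that variable instead of a hard-coded "git/" prefix may help; both the variable name and the layout are assumptions to verify against the walnascar fetcher documentation]

    # assumed: subdir= is resolved relative to the unpack directory, and the git
    # checkout directory name comes from BB_GIT_DEFAULT_DESTSUFFIX (default "git")
    SRC_URI += "file://abc;subdir=${BB_GIT_DEFAULT_DESTSUFFIX}/aa/ab/bc/"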
<rburton>
RP: marginally slower
<rburton>
RP: actually i might have done that with the wrong branch
<rburton>
running again
Guest9 has quit [Quit: Client closed]
Razn0r has joined #yocto
rob_w has quit [Remote host closed the connection]
<rburton>
RP: slightly faster for a middling number of threads (4-16), otherwise the same. For 8 threads, 76s vs 62s.
<rburton>
after 32 threads, it's faster by a couple of seconds
<RP>
rburton: ok, cool. That should be due to better load balancing
<rburton>
RP: it's just build/cache to remove if I want to benchmark the parse, right? Not build/tmp/cache too?
florian_kc has joined #yocto
<RP>
rburton: correct, although it does mean you won't include the initial "codeparser" hit but that generally just works
cyxae has joined #yocto
Kbo has quit [Quit: Client closed]
vthor has joined #yocto
vthor has quit [Changing host]
vthor has joined #yocto
ello_ has quit [Read error: Connection reset by peer]
<JPEW>
RP, rburton : Random guess that the limit is how quickly the results can be written back through the queue to the main thread
ello_ has joined #yocto
goliath has quit [Quit: SIGSEGV]
<rburton>
tried to glue some perfetto logging into the parser but the only library i can find to do that easily isn't actually very good
<RP>
JPEW: that would be my guess too
<rburton>
meta-python-image-ptest.bb seems to take 20s to parse
<rburton>
all the ptest recipes take about 20s to parse
<JPEW>
There do seem to be a few long tails
paulg has joined #yocto
<JPEW>
One advantage of the "pass the list" option is that we could potentially sort it if there was a good heuristic to put the longer-parsing items at the front?
<RP>
rburton: the ptest recipes are really slow
<RP>
rburton: they get parsed as one unit hence the time delay
Guest24 has joined #yocto
Daanct12 has quit [Quit: WeeChat 4.6.3]
<RP>
JPEW: I did try moving the ptest ones first and it does help. I needed the queue piece before that would really help though
<rburton>
yeah, good to know they're the cause for the tail though :)
<RP>
I did check that!
<RP>
The current patch at least allows things after them to be cleared
<rburton>
i must be doing something wrong with the perfetto log generation as it claims that only five parser threads actually do work
<RP>
rburton: I don't quite believe that :/
<rburton>
no me neither
jmiehe has joined #yocto
rfuentess has quit [Remote host closed the connection]
Guest24 has quit [Ping timeout: 272 seconds]
Guest24 has joined #yocto
Guest24 has quit [Quit: Client closed]
Guest24 has joined #yocto
Xagen has joined #yocto
jmd has joined #yocto
zhmylove has joined #yocto
jmiehe has quit [Quit: jmiehe]
Guest53 has quit [Ping timeout: 272 seconds]
Guest24 has quit [Quit: Ping timeout (120 seconds)]
Kubu_work has quit [Quit: Leaving.]
Guest24 has joined #yocto
<JPEW>
RP: Sent the patch that just does the shared counter
florian has quit [Ping timeout: 244 seconds]
florian_kc has quit [Ping timeout: 260 seconds]
frieder has quit [Remote host closed the connection]
Guest24 has quit [Ping timeout: 272 seconds]
florian has joined #yocto
<RP>
JPEW: thanks
florian_kc has joined #yocto
leon-anavi has quit [Quit: Leaving]
Guest24 has joined #yocto
zhmylove has quit [Remote host closed the connection]
<rburton>
in my testing the shared counter patch is actually a little bit faster
<RP>
rburton: from JPEW's description, it should be nicer
<rburton>
it's marginal, but it's also neater code :)
Guest24 has quit [Quit: Client closed]
<RP>
rburton: yes, it works in a nicer way too, less horrible queue processes
<rburton>
it's scaling exactly the same as the patches that were in next earlier today
<RP>
rburton: I suspect it is the same bottleneck of the return queue
<RP>
rburton: A bb.warn(len(pending)) in there could be interesting, see how much it backlogs
<rburton>
i'm so tempted to write a better perfetto library for py and let us throw a load of tracing data into bitbake
zwelch has quit [Quit: Leaving]
zwelch has joined #yocto
<LetoThe2nd>
rburton: can't AI do that?
<Crofton>
NO EMOJIS
<LetoThe2nd>
Crofton: You all know I ❤️ you right? 📣😁
<Crofton>
lol
michaelo has quit [Quit: leaving]
michaelo has joined #yocto
florian has quit [Quit: Ex-Chat]
florian_kc has quit [Ping timeout: 248 seconds]
Razn0r has quit [Ping timeout: 272 seconds]
<rburton>
khem: why does clang/common.inc set BPN=clang?
florian has joined #yocto
Jones42 has quit [Ping timeout: 252 seconds]
zeemate has quit [Ping timeout: 248 seconds]
<khem>
rburton: It was for multilib builds, but it perhaps should be in the .bb file
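[editor's note: a hedged illustration of what that BPN reset does, not a quote of the actual common.inc. In a multilib build the recipe name gains a prefix such as lib32-, so PN becomes e.g. lib32-clang; pinning BPN keeps file and path lookups on the plain "clang" name]

    # sketch: force the base recipe name so multilib variants (e.g. lib32-clang)
    # still resolve files/paths under "clang" rather than the prefixed PN
    BPN = "clang"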
<khem>
RP: is there a worker which has the buildtools tarball preinstalled?
Jones42 has joined #yocto
geoffhp has quit [Remote host closed the connection]
Jones42 has quit [Ping timeout: 248 seconds]
savolla has quit [Quit: WeeChat 4.6.3]
ptsneves has joined #yocto
geoffhp has joined #yocto
<JPEW>
rburton: I did a stupid hack to make it "remember" the parsing time for files and then sort them longest to shortest next time parsing runs
<JPEW>
It didn't make much difference for core, but core also doesn't have a lot of long parses
<JPEW>
Or rather, it has one 15 second parse, and the rest are < 1 second
<rburton>
yeah my testing is with meta-oe also which has a few magic ptest recipes
<JPEW>
rburton: I'll try that
<JPEW>
It makes very little difference; about 500ms
<JPEW>
which is not terribly surprising given how short the short parses are
<rburton>
well, viztracer looks interesting: a profiling tool that uses perfetto for display and lets you add custom events easily from code
<rburton>
something to try tomorrow morning to actually break down time usage in the parse
<rburton>
i'm also interested in why 'loading cache' takes 3 seconds on my 128-core machine with 256 GB RAM and NVMe storage
<rburton>
RP: good news! i think i've managed to rip lldb out of the clang recipe
<rburton>
a few ugly bits but i'll post a WIP branch with metrics tomorrow morning