<mischief>
trying to build on a riscv64 system.. build halts here but there's not much info in the log. ERROR: Failed to spawn fakeroot worker to run /home/mischief/src/poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_6.12.bb:do_install: [Errno 32] Broken pipe
ipgd has joined #yocto
ipgdbali has joined #yocto
ipgd has quit [Remote host closed the connection]
ipgdbali has quit [Remote host closed the connection]
ipgd has joined #yocto
jclsn has quit [Ping timeout: 272 seconds]
jclsn has joined #yocto
paulg has quit [Ping timeout: 268 seconds]
alperak has quit [Quit: Connection closed for inactivity]
risca has quit [Quit: No Ping reply in 180 seconds.]
risca has joined #yocto
<khem>
mischief: seems unrelated to riscv
<khem>
more related to your build machine
<mischief>
yea.. I'm thinking maybe I'm missing a kernel config on the host; this is the kernel the vendor provided with their.. Debian fork :|
leon-anavi has joined #yocto
wooosaiiii has joined #yocto
alperak has joined #yocto
mckoan|away is now known as mckoan
wooosaiiii has quit [Remote host closed the connection]
wooosaiiii has joined #yocto
ptsneves has joined #yocto
ptsneves has quit [Ping timeout: 272 seconds]
Tyaku has joined #yocto
jmd has joined #yocto
ipgd has quit [Read error: Connection reset by peer]
prabhakalad has quit [Ping timeout: 244 seconds]
prabhakalad has joined #yocto
zeemate has joined #yocto
mckoan is now known as mckoan|away
<RP>
mischief: a failure at that point is pseudo failing to start
<michaelo>
Greetings. Does anyone have examples using kas without poky? With poky, kas works automagically, but when I want to just use openembedded-core and bitbake separately, I can't seem to make it take my bitbake repository.
<michaelo>
I found a solution. I had to add this to the bitbake repository:
<michaelo>
layers:
  bitbake: excluded
ablu has quit [Ping timeout: 245 seconds]
ablu has joined #yocto
<rburton>
michaelo: yes that :)
<rburton>
bitbake isn't a layer
druppy has joined #yocto
<michaelo>
Indeed! However, the kas docs don't explain how to work with a separate bitbake repository
<rburton>
someone with experience writing documentation should fix that ;)
<rburton>
but yes it does sort of gloss over how the build system is configured
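For reference, a minimal kas file along the lines michaelo describes might look like the sketch below. The URLs, branches, machine and the layer key under the bitbake repo are illustrative; the layer key is the path of the bitbake checkout within that repo and may need adjusting to your layout.

    header:
      version: 14

    machine: qemux86-64
    distro: nodistro

    repos:
      bitbake:
        url: "https://git.openembedded.org/bitbake"
        branch: "2.8"
        layers:
          # bitbake is not a layer, so keep it out of bblayers.conf
          bitbake: excluded

      openembedded-core:
        url: "https://git.openembedded.org/openembedded-core"
        branch: "scarthgap"
        layers:
          meta: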
<RP>
rburton: I've been queueing your earlier version of the libmodule change since I think that one should be ok with git tag/mirroring as it was upgraded recently
alperak has quit [Quit: Connection closed for inactivity]
<rburton>
mischief: can you replicate that with an entirely fresh poky tree and no changes? and i presume you've verified that python3 -c 'import encodings' does actually work
<mischief>
the tree has no diff, the only thing i did was set MACHINE=qemuriscv64 and add INHERIT:remove = "uninative" to local.conf, because there seems to be no uninative for riscv64
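In conf/local.conf terms, that is just:

    MACHINE = "qemuriscv64"
    # no uninative tarball is published for riscv64 build hosts
    INHERIT:remove = "uninative"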
savolla has joined #yocto
<mischief>
host python3 works as far as I can tell, and the bug happens in the same way on both the host and a Debian container I made just to test this
goliath has joined #yocto
Fr4nk has quit [Quit: WeeChat 4.6.3]
Fr4nk has joined #yocto
npcomp has quit [Ping timeout: 245 seconds]
<RP>
mischief: you're building on a riscv64 machine?
<RP>
mischief: are there any PYTHON* environment variables needed for your python to work correctly?
<mischief>
no, there is nothing related to python in the environment by default
<mischief>
yes, it is a riscv64 native host.. mostly tried this for fun
<RP>
mischief: any pseudo.log files in TMPDIR that shed any light on what pseudo is doing?
<RP>
mischief: I'm not sure anyone has checked if the pseudo libc intercepts are right for riscv64
<rburton>
oh yeah i'd expect pseudo needs a port
<rburton>
pseudo needs COMPATIBLE_HOST entries
<mischief>
there are no pseudo logs anywhere that i see
<RP>
mischief: If it were me, I'd probably find the pseudo binary and try "pseudo bash"
druppy has quit [Ping timeout: 244 seconds]
<mischief>
it seems extremely unhappy trying to run it by hand
<RP>
mischief: that definitely doesn't look happy and is almost certainly related
<mischief>
am i supposed to set PSEUDO_PREFIX/-P when running pseudo by hand?
<fray>
there are basic pseudo test cases. If you are trying to run it on riscv, I would suggest building and trying pseudo outside of the build system first, and then focusing on YP integration. There are a couple of cases in the pseudo tests that are known to fail, the rename and remove tests specifically
<RP>
fray: this is on riscv64 so I suspect there is something more fundamental wrong
<RP>
mischief: I think even if you set PSEUDO_PREFIX, there are bigger issues there :/
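A rough sketch of trying the build's own pseudo by hand, along the lines RP suggests; the sysroot path and state directory here are illustrative and will differ per build:

    # pseudo-native as staged by bitbake (adjust the path to your TMPDIR layout)
    PSEUDO_SYSROOT=tmp/sysroots-components/riscv64/pseudo-native/usr
    export PSEUDO_PREFIX=$PSEUDO_SYSROOT
    export PSEUDO_LOCALSTATEDIR=/tmp/pseudo-debug   # scratch dir for pseudo's database and logs
    export PSEUDO_DISABLED=0
    mkdir -p "$PSEUDO_LOCALSTATEDIR"
    "$PSEUDO_SYSROOT/bin/pseudo" bash
    # inside the spawned shell, whoami/chown/mknod should appear to run as root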
<fray>
no doubt
<RP>
bitbake knows how to get that much right
<fray>
that's why I was suggesting running pseudo by itself with its test cases; that would show whether it even has a chance of working
<RP>
agreed, those would help narrow things down further
<Fr4nk>
Hi everybody, does anyone know how to generate an HTML list of the third-party licenses in my Poky OS with a bitbake command?
<RP>
mischief: interesting, it looks like some elements are working at least
leon-anavi has quit [Quit: Leaving]
leon-anavi has joined #yocto
<rburton>
hash question: if I have a class that does export FOO=${TMPDIR}/blaa and it should absolutely be ignored when working out hashes for rebuilds etc, is that BB_BASEHASH_IGNORE_VARS+="FOO"?
<RP>
rburton: In theory but I can't remember if that variable is recipe specific
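A sketch of the class fragment being discussed, assuming the variable is honoured for class-level appends as hoped (FOO and blaa are just the placeholder names from the question):

    # exported into the task environment, but excluded from task signatures
    # so that a different TMPDIR does not change the hashes
    export FOO = "${TMPDIR}/blaa"
    BB_BASEHASH_IGNORE_VARS += "FOO"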
dturull has joined #yocto
<dturull>
Hi RP. I'm working on the patch to filter into SPDX only the files used during compilation. From what I saw, the kernel is the only recipe that puts the sources in a location on the target that differs from what the SPDX is using. Would it be ok to modify the path in the debugsource information that I'm storing so that it matches what goes into the SPDX? Then the logic is simpler. For example, "linux-yocto-6.12.31+git/net/ipv4/tcp_ipv4.c" instead of only net/ipv4/tcp_ipv4.c. Or is it better to handle the case in spdx_common.py?
<RP>
dturull: I'm wondering why they're different and if we could somehow unify that
<fray>
any recipe can use work-shared, but few do
<fray>
the internal binary representations (for the most part) use on-target filesystem paths. So for things that don't (for whatever reason), special code may be needed..
<fray>
(I tried to do a filter like this in the past; what I ended up doing was warning when something looked like it needed custom filtering)
<dturull>
RP: to unify them we probably need to change something in kernel.bbclass
<dturull>
I can also look at it. For now what I have is: src if not kernel_src else src.replace(f"{kernel_src}/", f"{bp}/")
<dturull>
this seems to work, and then it is transparent to the spdx code.
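Purely as an illustration of the rewrite dturull describes (the helper name and the kernel_src/bp arguments are stand-ins, not the actual patch):

    # Map a debugsource path under the shared kernel source tree onto the
    # ${BP}-relative form used in the SPDX, e.g.
    #   <kernel_src>/net/ipv4/tcp_ipv4.c -> linux-yocto-6.12.31+git/net/ipv4/tcp_ipv4.c
    def normalize_debugsource_path(src, kernel_src, bp):
        if not kernel_src:
            return src
        return src.replace(f"{kernel_src}/", f"{bp}/")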
Net147 has quit [Ping timeout: 248 seconds]
Net147 has joined #yocto
Net147 has quit [Changing host]
Net147 has joined #yocto
<RP>
dturull: fair enough, that sounds like an improvement on v7
<dturull>
I'll try tomorrow to send a new version. I'm verifying with a world build that nothing breaks
dturull has quit [Quit: Client closed]
Minvera has joined #yocto
prabhakalad has quit [Ping timeout: 248 seconds]
prabhakalad has joined #yocto
goliath has quit [Quit: SIGSEGV]
prabhakalad has quit [Ping timeout: 248 seconds]
prabhakalad has joined #yocto
<mischief>
okay, found part of the problem
<mischief>
stat was broken
jclsn has quit [Quit: WeeChat 4.6.3]
Net147 has quit [Ping timeout: 248 seconds]
Net147 has joined #yocto
Net147 has quit [Changing host]
Net147 has joined #yocto
goliath has joined #yocto
olani_ has joined #yocto
Articulus has quit [Quit: Leaving]
florian has joined #yocto
<mischief>
sent a mail that should fix stat at least. my build is now progressing a bit more..
<sotaoverride>
anyone got any experience with the flutter-pi embedder? I'm trying to run it with the core minimal image on an rpi4 and I'm getting this: flutter-pi.c: Could not query DRM device list: No such file or directory
druppy has quit [Ping timeout: 248 seconds]
Kubu_work has joined #yocto
goliath has quit [Quit: SIGSEGV]
cambrian_invader has quit [Ping timeout: 268 seconds]
cambrian_invader has joined #yocto
frgo has quit [Read error: Connection reset by peer]
frgo_ has joined #yocto
leon-anavi has quit [Remote host closed the connection]
zeemate has joined #yocto
<rburton>
mischief: oh wow
<rburton>
mischief: what board is that?
goliath has joined #yocto
<mischief>
rburton: milk-v megrez.
<mischief>
i'm trying to find something to do with it, and since i added a kernel for it to meta-riscv a while back i figured why not make it build yocto for itself ;)
<mischief>
or, maybe at least try out the riscv vm extensions with qemu/kvm..
<rburton>
maybe being a build machine isn't its calling :)
<rfs613>
i could give you some even slower machines :)
<rfs613>
hmm, is there a recommended way to append some shell commands to do_patch (which is a bitbake python function)? I can do os.system("...") but it seems kludgy, is there a better way?
<mischief>
i too have even slower machines, but none of them are riscv
<mischief>
mildly concerning that a process randomly died under load and seems to be working fine the next run
<mischief>
[48458.750910] nativesdk-libgc[910027]: unhandled signal 11 code 0x1 at 0x00007dffb88927c8 in python3.12[10000+5b3000]
<rfs613>
mischief: maybe ran out of memory (or some other resource?), perhaps due to concurrent compilation of other recipes?
<mischief>
it's only got 4 cores, but 32G of memory, and i would not really expect OOM to cause SIGSEGV
<mischief>
well, now there's a real problem. no libgcc... maybe I'll poke it more later. http://0x0.st/8EsY.947273
<rfs613>
oh right, signal 11 is segfault, for some reason i read it as errno 11 (EAGAIN)
* rfs613
adds a task to run after do_patch
<RP>
rfs613: you know about postfuncs ?
drkhsh_ is now known as drkhsh
<rfs613>
RP: had kind of assumed that could only be used for python functions
<RP>
rfs613: no, you can list shell and python funcs in there
<RP>
you just can't call back into python from a shell function
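A small recipe sketch of the postfuncs mechanism RP describes; the function name and the sed command are made up for illustration:

    # run an extra shell function after do_patch without redefining the task
    do_patch[postfuncs] += "fixup_after_patch"

    fixup_after_patch() {
        # illustrative shell step run against the patched sources
        sed -i -e 's/FOO/BAR/' ${S}/config.in
    }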
<fray>
mischief: were the tests in the pseudo source helpful in identifying the issue(s)? If not, then we should add a stat test
<RP>
mischief: I merged that patch, thanks
<rfs613>
RP: indeed postfuncs did work for calling my shell function, thanks
<mischief>
the test-parallel-* tests are still broken, but I ran the build with only the stat fix and it seems to work
<mischief>
RP: thanks!
<fray>
I'm really trying to make sure we have basic tests for things that can fail, without having to write everything
<fray>
ya, I expect the test-parallel mv and rm will fail (they do on x86)
<RP>
mischief: we added the parallel ones knowing they break. It is a real bug we just don't know how to fix :/
<fray>
(and luckily it's not a workflow we exercise often, but we need to make sure people know it's an issue)
<mischief>
is it a race?
<fray>
yes.. between the filesystem, the database and the pseudo communication system
<RP>
mischief: effectively
<fray>
if two things rename (or remove and recreate) the same file before the communication and db are updated, then future calls can come in and make further changes, and we lose the ability to track the file(s)
<fray>
so ya, effectively a race, but it's not a traditional "just add a lock" situation
jmd has quit [Remote host closed the connection]
<khem>
RP: there is one issue I am seeing when setting TOOLCHAIN ?= "clang" and building gdb-cross-x86_64, it reports TOOLCHAIN = gcc but TCOVERRIDE = toolchain-clang
<khem>
RP: I wonder if deferring recipes inheriting cross.bbclass has issues, I am on latest master-next btw.
<khem>
I am travelling and not able to take a deeper look myself
<RP>
khem: hmm, that does sound odd. I'll have to try and take a look and see what is going on
<fray>
khem, did you see the risc-v stuff I sent to the architecture list? It introduces a new lib/oe/tune.py for tune related functions.. anyway, I was considering writing a 'TUNE' (tune_features) to gcc multilib-generator function..
<fray>
Any thoughts?
<khem>
fray, I have seen the email, but reading on mobile takes a while
<fray>
ya, it's not a short writeup or patch
<fray>
(I already have a new version of the patch, works the same way -- just moves some code in the tune.py to make it more efficient)
<khem>
fray: it will be good to keep clang in mind, are you using + separator to construct mcpu/march ?
<fray>
the march is created following the rules in the risc-v ISA manual.. multi-character extensions are prefixed with "_", and any further extensions must also use _.. I do not verify alpha numeric or anything like that, just the tune_features that are implemented through the normal methods..
<fray>
the one thing I did have to do is keep the ISA in the order that GCC expects it, which I suspect clang is fine with
savolla has quit [Ping timeout: 276 seconds]
<fray>
rv 32 i m a c zicsr zifencei will produce "-march=rv32imac_zicsr_zifencei -mabi=ilp32"
<fray>
change that to rv 32 i m a f c zicsr zifencei and the produced output changes to "-march=rv32imafc_zicsr_zifencei -mabi=ilp32f"
<fray>
note when I say output I'm talking about TUNE_CCARGS; there are intermediate TUNE_RISCV_MARCH and TUNE_RISCV_MABI variables which get rv32imafc_zicsr_zifencei and ilp32f respectively
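A rough python sketch of the TUNE_FEATURES to -march mapping fray is describing; the real code lives in the proposed lib/oe/tune.py on the architecture list and may differ in detail:

    def riscv_features_to_march(tune_features):
        """e.g. "rv 32 i m a f c zicsr zifencei" -> "rv32imafc_zicsr_zifencei"."""
        feats = tune_features.split()
        xlen = feats[1]                              # "32" or "64"
        exts = feats[2:]                             # assumed already in the order GCC expects
        singles = "".join(e for e in exts if len(e) == 1)
        multis = [e for e in exts if len(e) > 1]     # zicsr, zifencei, ...
        march = "rv" + xlen + singles
        if multis:
            march += "_" + "_".join(multis)          # multi-character extensions joined by "_"
        return march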
<khem>
yes _ will work with clang too
<khem>
fray: did you consider the full list?
<fray>
I only considered items specifically listed.. with the exception of formatting, where I followed the information from the ISA manual (Volume I)
<fray>
it's explicit that single-character extensions should be first, with multi-character ones last.. While we only have a few Z extensions, there are also S extensions, but we don't have any
<fray>
I will only implement ones that we have specific users for
<fray>
far too many extensions to try to do them all (and no way to test it)
<fray>
Z, Sh, Sm, Ss, and X are the multicharacter extensions..
<fray>
we have some Z, no S*, and in the write-up I said 'X' extensions belong in a specific tune and NOT in the arch-riscv
<khem>
hmm, but is it extendible by simply adding new tunes
<fray>
similarly, if a tune specifies an mcpu and/or mtune, they should go _after_ the -march
<fray>
yes
<khem>
I want the core logic to be solid and reusable
<fray>
tune-processor.inc must include a previous tune-processor.inc or arch-riscv.inc
<khem>
you/amd will only use a small subset, I understand that
<fray>
the arch-riscv.inc needs to be in the processing order GCC expects for the generated name
<fray>
tunes can extend this later and/or override the TUNE_CCARGS in a tune-specific way
<fray>
tunes can NOT change pkgarch or arch, that is 'global'
<fray>
rv32i, rv32e, rv64i, rv64e as bases are defined, plus the extensions M A F D C B V Zicsr Zifencei Zba Zbb Zbc Zbs and Zicbom
<fray>
tune features are 'rv', XLEN (32 or 64), e or i, and then the extensions (all lower case)
<fray>
the oe.tune.riscv_isa_to_tune("...") will convert ISA notation to TUNE notation, and can handle the 'g' abbreviation, but we do not support 'g' in the features
<fray>
e.g. oe.tune.riscv_isa_to_tune("rv64gc") would return: rv 64 i m a f d c zicsr zifencei
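A rough sketch of what that conversion could look like, based only on the behaviour described here ('g' expanding to imafd plus zicsr/zifencei); the actual oe.tune.riscv_isa_to_tune() in the patch may differ:

    def riscv_isa_to_tune(isa):
        """e.g. "rv64gc" -> "rv 64 i m a f d c zicsr zifencei"."""
        isa = isa.lower()
        xlen = isa[2:4]                          # "32" or "64"
        head, _, tail = isa[4:].partition("_")   # single letters vs "_"-separated multi-letter extensions
        singles, multis = [], []
        for ext in head:
            if ext == "g":                       # abbreviation for imafd + zicsr + zifencei
                singles += ["i", "m", "a", "f", "d"]
                multis += ["zicsr", "zifencei"]
            else:
                singles.append(ext)
        if tail:
            multis += tail.split("_")
        return " ".join(["rv", xlen] + singles + multis)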
<fray>
we do NOT support 'profile' notation. Only ISA notation.
<khem>
that's ok I guess, 'g' is an acronym
<fray>
'g' is an 'abbreviation' per the manual
<fray>
g = base + imafd_zicsr_zifencei
<fray>
note you can specify a 'feature' of 'b', but it will expand in march= to zba_zbb_zbs ... there is a "hole" there in that a recipe that needs to know about 'zbb' will need to check for either 'b' or 'zbb', for instance
<fray>
the current tune stuff has no 'b implies zbb' behavior; that might be something to add in the _future_, but not now.. for now we live with what we have until this works
<khem>
e.g. rva22u64 is a profile; would it be part of the above set?
<fray>
rva22u64 is a profile, and we don't support profiles.. they have no direct meaning, so we'd need an incredibly complex function to do the translation
<fray>
it's possible it can be implemented, but simply doesn't seem worth it
<khem>
profiles are what will be used, at least for linux or other hosted envs, and for micros too (rvmXX)
<fray>
linux doesn't use profiles that I can find (anymore); it definitely did in the past
<fray>
reading between the lines in the current ISA document, it sure seems like profiles are the 'old way' of doing this, but they don't recommend it
<khem>
I think RISC-V International is interested in grouping with profiles more than ISAs
<fray>
I was reading it the opposite way: everything is now base ISA + extensions, and profiles don't have a lot of meaning
<khem>
hmm, it must be the new way :)
<fray>
20250508 - Ratified RV Instruction Set Manual Volume I
<fray>
again I'm reading between the lines here. Profiles just don't seem to be anything recommended (anymore)
<khem>
it's something you can construct using ISA + extensions, so perhaps not a big deal, but we need to see whether it is actually deprecated or not
<fray>
there is absolutely no reason (that I see) that we should use/support profiles. All they do is obfuscate what is actually happening. But I did leave room that someone could implement a 'profile -> tune_features' converter if there is value in it, but I believe focusing on the ISA makes far more sense for us
<khem>
on the positive side it will let us build any combination of ISA + extensions
<fray>
there are some conflicts, like i and e... but ya, it's designed to allow any 'reasonable' combination
<khem>
I will take a look in the evening once I am back from dinner
<fray>
sounds good
<fray>
(there is also a bug in the build of OpenSBI -- YP bugzilla filed on that)
<fray>
15897
goliath has quit [Quit: SIGSEGV]
jpuhlman has joined #yocto
savolla has joined #yocto
Kubu_work has quit [Quit: Leaving.]
Kubu_work has joined #yocto
npcomp has joined #yocto
savolla has quit [Ping timeout: 252 seconds]
zeemate has quit [Ping timeout: 248 seconds]
savolla has joined #yocto
<mischief>
any ideas why ld can't find -lgcc in my riscv64 build? there is a libgcc.a in the tree where --sysroot points..