michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.1.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
<iive>
BtbN, I know the developers working on mesa were fired.
<BtbN>
So we probably have some rather large chunks of rather popular code now completely unmaintained :/
<BtbN>
I don't think anyone will care about the QSV issue
Kimapr_ is now known as Kimapr
<BtbN>
not sure who works on libva
<BtbN>
jkqxz is listed for large parts of it at least. But all the other names I think are Intel-Employees
<iive>
Intel is cutting muscle... they might not be able to walk out of their current problems, unless there's a sudden hunger for a lot of medium-quality silicon. like for weapons.
Kimapr has quit [Remote host closed the connection]
Kimapr has joined #ffmpeg-devel
<iive>
n8 ppl
iive has quit [Quit: They came for me...]
<kasper93>
yeah, Intel situation is dire.
<BtbN>
I feel like what'll happen is the government bailing out the silicon part of it, cause it's way too big of a strategic asset for them.
<BtbN>
And everything else will be sold to the highest bidder
<BtbN>
But then again, AMD was in a worse spot, and came back. So probably too early to say anything.
pelotron has joined #ffmpeg-devel
pelotron has left #ffmpeg-devel [#ffmpeg-devel]
<kasper93>
AMD gambled and won with Ryzen.
<Yalda>
I have to rely on Intel GPU now and might just end up using VAAPI and skip the QSV/VPL altogether. Luckily I am just doing 264
<Yalda>
for a specific fixed use lab at least
<BtbN>
Ironically, Intels GPU division isn't doing too horrible
<BtbN>
So we need Nvidia to buy out Intel's CPU segment and start making x86 CPUs, and then Intel goes full GPU
<BtbN>
I just thought of a potential bot/workflow for merge on Forgejo: A simple workflow that gets active when the LGTM label is set on a PR, and all other conditions for merges are fulfilled.
<Yalda>
NVENC is still king to me
<BtbN>
In that case, it will either: FF-Merge the PR, and if an FF is not possible, rebase it, and repeat once CI is green
<BtbN>
If rebasing fails, it'll remove the LGTM label and call it a day on that PR for now
<BtbN>
I think that way we'd effectively have implemented merge trains?
<BtbN>
Forgejo already has support for PRs depending on one another, so that could be taken into account that way
termos has quit [Ping timeout: 252 seconds]
MisterMinister has quit [Ping timeout: 255 seconds]
LaserEyess has quit [Server closed connection]
MisterMinister has joined #ffmpeg-devel
LaserEyess has joined #ffmpeg-devel
LaserEyess has quit [Changing host]
LaserEyess has joined #ffmpeg-devel
<kasper93>
BtbN: you need to order your rebase/merge in the queue
<kasper93>
CI check is not instant, so if anything else is merged in the meantime
<kasper93>
your check will get invalidated
<kasper93>
and you will keep rerunning CI
<kasper93>
and fighting for a merge spot
<kasper93>
so basically all PRs need to be ordered FIFO
<kasper93>
and the merge/CI check needs to wait until the previous one is merged or rejected
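The bot logic BtbN and kasper93 sketch above, with the FIFO ordering folded in, might look roughly like this. This is a hypothetical model, not Forgejo's actual workflow API; all names here are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class PR:
    number: int
    lgtm: bool = True            # LGTM label is set
    ci_green: bool = True        # CI passed on the current base
    can_fast_forward: bool = False
    rebase_ok: bool = True       # a rebase onto HEAD would succeed

def process_queue(queue):
    """Walk the LGTM'd PRs strictly in FIFO order: each PR is only
    considered once the one ahead of it is merged or rejected, so a
    green CI result can't be invalidated by an out-of-order merge."""
    merged, rejected = [], []
    for pr in queue:
        if not (pr.lgtm and pr.ci_green):
            rejected.append(pr.number)   # skip for now; revisit later
            continue
        if pr.can_fast_forward:
            merged.append(pr.number)     # FF-merge directly onto HEAD
        elif pr.rebase_ok:
            merged.append(pr.number)     # rebase, wait for green CI, merge
        else:
            rejected.append(pr.number)   # rebase failed: drop LGTM label
    return merged, rejected
```

The point of the FIFO constraint is visible in the loop: a PR never starts its merge attempt until the queue ahead of it is drained, which is the cheap approximation of a merge train.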
<fjlogger>
[FFmpeg/FFmpeg] Pull request #20234 opened: avcodec/aac/aacdec: dont allow ff_aac_output_configure() allocating a new frame if it has no frame (https://code.ffmpeg.org/FFmpeg/FFmpeg/pulls/20234) by michaelni
kurosu has quit [Quit: Connection closed for inactivity]
Kei_N_ has joined #ffmpeg-devel
Kei_N has quit [Ping timeout: 260 seconds]
zsoltiv has joined #ffmpeg-devel
zsoltiv_ has joined #ffmpeg-devel
<BtbN>
kasper93: I really don't think a few extra CI runs are all that bad, if we in exchange can easily implement something at least akin to merge trains.
<BtbN>
A full proper implementation is much more involved and not something I can spin up in an afternoon
Kimapr_ has quit [Remote host closed the connection]
Kimapr_ has joined #ffmpeg-devel
Kimapr has quit [Remote host closed the connection]
<BtbN>
I see no reason why CI would be wrong here
<Traneptora>
it's failing a CRC for an encoded PNG file
<Traneptora>
the only reason I can think of is that zlib is producing slightly different compressed results on the CI vs locally
<Traneptora>
what version of zlib is running on the CI? Perhaps I can try to reproduce it
<JEEB>
are you locally on zlib-ng?
<Traneptora>
no, locally I'm on zlib
<Traneptora>
1.3.1-2
<JEEB>
weird, I think the only differing thing should have been quite a long time ago IIRC with normie zlib
<Traneptora>
why else would the PNG encoder produce a different CRC on the local vs CI?
<Traneptora>
what environment is the CI?
zsoltiv has quit [Ping timeout: 248 seconds]
zsoltiv_ has quit [Ping timeout: 272 seconds]
<Traneptora>
I wonder if it's clang vs gcc. CI is running gcc, locally I'm using clang-asan-ubsan
<sfan5>
relying on deterministic zlib output is an issue in itself
<ramiro>
does anybody else plan on adding proposals to STF? we're still short of the minimum funding threshold.
microchip_ has quit [Ping timeout: 244 seconds]
<JEEB>
sfan5: yes, I have wondered whether we should just add an option to the utilized muxer to remove packet and file sizes
<Traneptora>
sfan5: I don't disagree, and it may not even be the issue here. I'm investigating
<JEEB>
since all that matters is that when stuff goes through the PNG encoder and decoder, the decoded image matches what's expected
paulk has quit [Remote host closed the connection]
<ramiro>
iirc jamrial sent a patchset to remove the crc and size from the output, that sounded like a good solution. my proposal was to implement the simplest uncompressed deflate when compression_level is 0, just enough to have tests be deterministic.
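On ramiro's point: deflate already has an uncompressed "stored" block type whose byte layout is fixed by RFC 1951, so level-0 output doesn't depend on any compressor's match-finding heuristics. A quick sanity check, assuming the system zlib (the test data here is made up):

```python
import zlib

data = b"not very compressible \x00\xff" * 512

# Level 0 writes RFC 1951 "stored" blocks: the payload is copied verbatim
# with a small fixed-format block header, plus the zlib wrapper and the
# adler32 checksum, so the output bytes are fully determined by the input.
raw = zlib.compress(data, 0)

assert zlib.decompress(raw) == data
assert len(raw) > len(data)            # stored blocks add header overhead
assert zlib.compress(data, 0) == raw   # deterministic for a given input
```

That is the property that would make a FATE test stable across zlib and zlib-ng, at the cost of uncompressed reference files.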
kurosu has joined #ffmpeg-devel
<Traneptora>
so I just changed toolchain to gcc instead of clang-ubsan-asan
<Traneptora>
and got a different result
<Traneptora>
but it's not the same value that happened on CI
microchip_ has joined #ffmpeg-devel
microchip_ has quit [Read error: Connection reset by peer]
microchip_ has joined #ffmpeg-devel
<BtbN>
Traneptora: probably whatever Ubuntu 25.10 ships. The image normal CI runs on is pretty much just Ubuntu 25.10 with a bunch of stuff installed via apt
skinkie has quit [Server closed connection]
skinkie has joined #ffmpeg-devel
<Traneptora>
looks like the issue is not zlib
<Traneptora>
I'm getting two different exif chunk payloads depending on which compiler I use
<Traneptora>
that smells like UB, but I'm surprised ubsan didn't catch it
Kimapr has quit [Remote host closed the connection]
Kimapr_ has joined #ffmpeg-devel
Kimapr_ has quit [Ping timeout: 252 seconds]
<indecisiveturtle>
Lynne: Hi, have a small question about using multiple contexts. Switched the encoder to receive_packet like you did for ffv1 but for some reason using more than 1 context makes encoding a little bit slower. Do you remember what cases it was faster?
<BtbN>
Xe: would it be possible to pre-create and add a VOLUME for /data or something in the anubis docker image? I'm trying to set up persistent storage, but when I mount a volume at /data, it's default owned by root, and anubis fails to write there.
<BtbN>
And there is also no shell or anything in the image, so I have no chance to manually fix it
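The fix BtbN asks for would be a small image change; a sketch of what that could look like in the anubis Dockerfile (the uid 1000 and the /data path are assumptions, not anubis's actual values):

```dockerfile
# Hypothetical addition: create /data owned by the runtime user *before*
# declaring it a volume, so a freshly created named volume mounted there
# inherits this ownership instead of defaulting to root.
RUN mkdir -p /data && chown 1000:1000 /data
VOLUME /data
```

This works because Docker copies the image path's contents and ownership into a named volume on first use; a bind mount would still need a one-off chown from the host.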
<kasper93>
speaking of
<kasper93>
I check email on my phone sometimes
<kasper93>
(I know real hackers don't do that)
<kasper93>
and when I click on view on forgejo, it sometimes blocks me
<kasper93>
"invalid response"
<kasper93>
some other times it works
<JEEB>
fun
<kasper93>
but it's annoying to be blocked
<JEEB>
and yea, I read my mails on the go as well sometimes
<quietvoid>
i get this on mobile too
<BtbN>
That's what I'm trying to fix
<BtbN>
someone said it's because Forgejo does not persist its private key for the cookies
<BtbN>
It doesn't make sense to me, but I'm trying to configure persistence for that stuff
<kasper93>
ah, thanks
<BtbN>
Like, Desktop Browsers don't seem to have that issue
<BtbN>
So I doubt this is it.
<kasper93>
yes, never had problem on desktop
<BtbN>
On Android, Chrome also always works fine, it's only Firefox
<kasper93>
same here
<BtbN>
So there's a good chance Firefox just is broken
TheVibeCoder has quit [Quit: leaving]
<kasper93>
anyone interested in reviewing #20226? It cleans up the css.min.gz generation a bit; I pulled this from softworkz's branch posted a while back on the ML
<kasper93>
lgtm, but I can't approve myself
ngaullier has joined #ffmpeg-devel
<BtbN>
new anubis config is in place, maybe it's better now
<kasper93>
I will let you know if I see it again
ngaullie has quit [Ping timeout: 252 seconds]
<kasper93>
though it was working fine recently already
<BtbN>
It at least should re-prompt less often now
<BtbN>
Still giving me "Invalid response" on Android
<JEEB>
kasper93: I'm looking at the diff and I've got tons of questions which makes me feel like a dum-dum
<kasper93>
yeah, it's not easy to look at
<BtbN>
I think I reviewed those before on the ML
<kasper93>
this is v8 btw
<BtbN>
iirc since that patchset, our bin2c has gained native compression support?
<BtbN>
so the GZIP step might be unnecessary
<JEEB>
so previously the css and html output files were in RESOURCEOBJS, which is now removed from whatever the SECONDARY target is
<JEEB>
(and removed altogether)
<BtbN>
they were added to the _wrong_ .SECONDARY
<BtbN>
the patches move them to the correct place
<BtbN>
Adding them where they currently are achieves nothing
<JEEB>
right, resources/Makefile is its own thing
<BtbN>
ah, no. The "zlib support for bin2c" patch never landed
<BtbN>
so the set should be good to go
<kasper93>
I was just about to check that, you saved me a few clicks ;p
<JEEB>
technically the comment removal in resources/Makefile is an unrelated change in the first commit, which would gain softworkz +1 cleanup commit in the grand calculation.
<kasper93>
I can restore comments, I think they are not that bad
<BtbN>
the comment makes sense to remove, since the patch does what it suggests
<JEEB>
oh
<JEEB>
it just did not uncomment .PRECIOUS, so I just assumed it was a leftover of some old way of doing things
<JEEB>
and thus if it was proper to do it some other way, then a "cleanup useless leftover comment" commit would have been a good way of not mangling it within "fix double-build..."
<JEEB>
(as the first commit, for example)
<JEEB>
but yea, I looked at the second diff and I... don't think this is a quick one for me so if you think this is good I'm fine with it
sm2n has quit [Server closed connection]
sm2n has joined #ffmpeg-devel
ccawley2011 has joined #ffmpeg-devel
kasper93_ has joined #ffmpeg-devel
kasper93 is now known as Guest7758
kasper93_ is now known as kasper93
kasper93 has quit [Read error: Connection reset by peer]
Guest7758 has quit [Ping timeout: 255 seconds]
indecisiveturtle has quit [Ping timeout: 244 seconds]
Venemo has quit [Remote host closed the connection]
<kasper93>
undefined reference to `ff_proresdsp_init'
<JEEB>
ah, the random FATE tester provides <3
<JEEB>
it nicely finds missed cases where something is missing for the build to work
ccawley2011 has joined #ffmpeg-devel
DauntlessOne498 has quit [Read error: Connection reset by peer]
DauntlessOne498 has joined #ffmpeg-devel
ngaullie has joined #ffmpeg-devel
ngaullier has quit [Ping timeout: 260 seconds]
jamrial_ has joined #ffmpeg-devel
jamrial has quit [Read error: Connection reset by peer]
<BtbN>
I really don't understand the Vulkan loader business
<BtbN>
I just wrote a loader for the loader. Because you can't statically link the _loader_
<JEEB>
hilariously enough there is a static build option for the loader, but by default the build system only allows it for macOS (moltenvk)
<BtbN>
It also does not work
<BtbN>
It then has no idea where to find the stuff it's supposed to load, cause it looks in /opt/ffbuild
<JEEB>
at least it worked earlier when patched with that one thing I found from shinchiro's repo. also huh, never had that with my mpv builds, but I haven't rebuilt the vulkan bits in a while
<BtbN>
It's only for AppleOS cause Apple does not support Vulkan
<BtbN>
so you just statically link the loader together with MoltenVK
<JEEB>
yea
<BtbN>
There are no OS drivers for it to possibly load