michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 8.0 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
indecisiveturtle has quit [Ping timeout: 260 seconds]
<BtbN>
kasper93: yeah, I don't see anything left unaccounted for. The only reliable way I found to prevent a crash is a long-ish sleep right before RoUninit
fjlogger has joined #ffmpeg-devel
<BtbN>
The backtrace of the crash does not touch any FFmpeg code. Though in another thread it's in the middle of RoUninit
<BtbN>
But it crashes somewhere in the internals of graphicscapture.dll
<haasn>
I now also hit the bug where inline comments appear on the wrong line when viewing individual commit diffs; though strangely enough they are attached to the correct line in the summary / discussion tab
<JEEB>
Traneptora: finally got to responding to your question :)
<Traneptora>
JEEB: my question was because it was in movenc.c
<Traneptora>
is movenc.c like mov.c where it's actually all of them, or is it just MOV?
paulk has quit [Ping timeout: 260 seconds]
<JEEB>
movenc is all of them
<JEEB>
QTFF is what ISOBMFF was standardized off of, and there are some very minuscule differences in the base format (the reason why the audio descriptor is very funkily worded in ISOBMFF)
<JEEB>
I think l-smash's repo contains a listing of the actual differences between QTFF and MP4
<ramiro>
I don't understand the pr_labeler failure in pr-20292. is this what's blocking me from approving it?
<ramiro>
haasn: for the last commit ("fixup formats.c") the commit message could be improved (I don't know if you plan on using it as an actual fixup to another commit)
<kierank>
forgejo ci being crap
<kierank>
it's as if people were saying not to use a beta ci system
paulk has joined #ffmpeg-devel
_av500_ has quit [Remote host closed the connection]
<Lynne>
haasn: your recent lavfi stuff has broken scale_vulkan for debayering
<Lynne>
Impossible to convert between the formats supported by the filter 'graph -1 input from stream 0:0' and the filter 'auto_scale_0' Link 'graph -1 input from stream 0:0.default' -> 'auto_scale_0.default': src: bayer_rggb16le dst: bayer_rggb16le
<haasn>
Do you know which commit?
<Lynne>
no, sometime between when 8.0 was tagged and now
<Lynne>
-vf hwdownload,format=bayer_rggb16le fails with the same error too
<haasn>
FWIW, this error message is misleading and it's on my TODO list to improve; the error here is some other negotiation property besides the format but it only prints the formats on error currently
<haasn>
was hoping to do it after the AVFrame.alpha PR lands
<haasn>
(still soliciting reviewers for that)
<BtbN>
kasper93: you mean to "close" the closable? I call Close right before it.
<kasper93>
I mean not to close, because that is Release's job
<BtbN>
Hm, then I don't see an issue. I could use the generic close helper there, true. But it should not cause any issues not to, I never re-use the handle.
<kasper93>
close is not refcounted
<kasper93>
iunknown release is refcounted
<kasper93>
if you close something that has internal reference, things may go wrong
<BtbN>
Closing it is how you stop capturing
<BtbN>
you then still need to Release the handle you got to it
GewoonLeon has quit [Ping timeout: 248 seconds]
<BtbN>
I'm not sure I fully follow. But if you QueryInterface something, the handle it returns has its ref-count increased, so you need to Release it once again
<BtbN>
Even if it's technically a handle to the same object
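(For reference, a minimal C sketch of the refcounting pattern described above, using COBJMACROS; the helper name is hypothetical, IUnknown::QueryInterface/Release are the real API:)

    #define COBJMACROS
    #include <windows.h>
    #include <unknwn.h>

    /* Hypothetical helper: every successful QueryInterface adds a reference,
     * even though the returned pointer aliases the same underlying object,
     * so it needs its own Release. Close() (IClosable) only stops the
     * capture; it does not drop a reference you still hold. */
    static void query_and_release(IUnknown *obj)
    {
        IUnknown *alias = NULL;
        if (SUCCEEDED(IUnknown_QueryInterface(obj, &IID_IUnknown, (void **)&alias))) {
            /* ... use alias ... */
            IUnknown_Release(alias); /* balances the QueryInterface above */
        }
        IUnknown_Release(obj);       /* balances the reference the caller handed over */
    }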
<Lynne>
haasn: something's definitely off with the colortemperature filter though
ngaullier has joined #ffmpeg-devel
microlappy has joined #ffmpeg-devel
<Lynne>
ffmpeg -i <something> -vf format=yuv420p,colortemperature=5500 -c:v rawvideo -y test.nut results in a bgr0 output, but the output is anything but bgr0
<Lynne>
it seems to be outputting yuv420p but the pixfmt is bgr0
GewoonLeon has joined #ffmpeg-devel
<Lynne>
so mpv segfaults
<Lynne>
and ffmpeg.c outputs broken files
<jamrial>
Lynne: that filter doesn't accept yuv420p as input, so a scaler will be autoinserted before it in the filterchain
<Lynne>
could the scaler be misconfigured or something?
<jamrial>
and it will autoselect rgb0
<jamrial>
no, why? i tried and the output is not broken
<Lynne>
weird, its broken for me
<Lynne>
oh well
microlappy has quit [Remote host closed the connection]
<jamrial>
what is this ohcodec openharmony stuff?
<JEEB>
huawei's cross-kernel (linux and own kernel) OS libraries I presume
<JEEB>
not sure how much of it is open at all in reality
<BtbN>
The code seems to be fully open. But obviously not the hardware
<JEEB>
is the non-linux kernel stuff open and bootable in a VM or so?
<jamrial>
why was a decoder added using it instead of a hwaccel?
<jamrial>
i thought we stopped doing the former
<JEEB>
we should have at least
<JEEB>
I wouldn't be surprised if the chinese community posted and pulled it in
<BtbN>
I guess it depends if the hardware/library is even able to act as hwaccel
mkver has joined #ffmpeg-devel
ngaullier has quit [Ping timeout: 258 seconds]
ngaullier has joined #ffmpeg-devel
<BtbN>
Do we have any guarantee that all my filter functions are always called from the same thread?
<BtbN>
Cause WinRT seems to be rather peculiar with its internal event loop, which it somehow associates with the thread you call RoInitialize on
<kasper93>
BtbN: RoInitialize is initializing per thread
<BtbN>
Well, so what happens if the filter's request_frame function is then called on a different thread? I don't think the API forbids this
<BtbN>
this basically means I _HAVE_ to put any and all winrt interactions into my own thread, and somehow get frames out of it
<kasper93>
> Use the RoInitialize function to initialize a thread in the Windows Runtime. All threads that activate and interact with Windows Runtime objects must be initialized prior to calling into the Windows Runtime.
<BtbN>
Well, so that means implementing Graphics.Capture in FFmpeg is impossible
<BtbN>
cause we have no way to impose that requirement on callers
<BtbN>
The FFmpeg api says "as long as nothing accesses a context from two threads in parallel, you're good"
ngaullier has quit [Ping timeout: 260 seconds]
<BtbN>
So a graphics capture based filter would only ever work by chance
<BtbN>
and it would likely also explode if someone tried to create a second one from the same thread
<BtbN>
the hell kinda wonky API is this
<kasper93>
just need to call init per thread once, it's not impossible
<kasper93>
also users don't need to interact with this api
<BtbN>
No, that's not how it works
<kasper93>
only ffmpeg does
<BtbN>
Users interact with the FFmpeg api
<BtbN>
and if the FFmpeg api suddenly demands that you cannot switch threads for this one filter, there is no good way to communicate it
<kasper93>
ffmpeg api is not related to winrt at all
<BtbN>
It would suddenly become related if it uses WinRT stuff like this
<BtbN>
Cause you need to RoInitialize the thread, which only works once
<BtbN>
if you call it a second time on the same thread, it fails
<kasper93>
it doesn't fail
<BtbN>
and then also create a dispatcher queue for the thread, which you can only do once or it fails
<kasper93>
it returns S_FALSE which is success
<kasper93>
for already initialized
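(A minimal sketch of the per-thread init being described here, against the documented roapi.h entry points; the helper name is hypothetical:)

    #include <windows.h>
    #include <roapi.h>

    /* Hypothetical helper: RoInitialize is per thread. S_OK means this call
     * initialized the thread, S_FALSE means the thread was already
     * initialized (still success); RPC_E_CHANGED_MODE is the case where it
     * was already initialized with a different apartment type. */
    static int winrt_init_this_thread(void)
    {
        HRESULT hr = RoInitialize(RO_INIT_MULTITHREADED);
        if (hr == RPC_E_CHANGED_MODE)
            return -1;
        return SUCCEEDED(hr) ? 0 : -1;
    }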
<BtbN>
And then safe uninit becomes virtually impossible as well
<kasper93>
also there are a dozen solutions to provide thread safety in ffmpeg
<BtbN>
cause you would somehow need to know if another filter uses this thread, or the user themselves do winrt stuff on it
<BtbN>
You don't seem to understand the problem
<BtbN>
This entire API uses state local to the current thread everywhere
<BtbN>
it assumes there is a Dispatcherqueue event loop running for it somewhere
<BtbN>
and all work is done inside of it
<kasper93>
you do, and you control what IS the current thread
<BtbN>
No I don't
<BtbN>
I get called on some thread out of my control
<kasper93>
you cannot create threads in ffmpeg?
<kasper93>
that's a new one
<BtbN>
That doesn't help me get frames to the user
<BtbN>
I eventually HAVE to call functions on the thread the request_frame function is called on
<kasper93>
frames are on a d3d11 context, it's not winrt
<BtbN>
No, they use a WinRT specific D3D11 API
<BtbN>
The frame that comes out of there is a IDirect3DSurface
<BtbN>
which is a WinRT API
<BtbN>
There is just no way to get a frame out of there on a thread that's not "The WinRT thread"
<BtbN>
so if the user calls request_frame on another thread, it's just impossible
Flat_ has joined #ffmpeg-devel
<BtbN>
I also cannot just copy it to a D3D11 texture in a background thread
Flat has quit [Ping timeout: 248 seconds]
<BtbN>
since that would have to happen on the HWDeviceContext's D3D11 context, and interacting with a D3D11 Device Context is not thread safe
<BtbN>
The more I look at it, the fewer options I see. I think it might legitimately be impossible to implement this in a library
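(One relevant piece of existing API here: FFmpeg's D3D11 hwdevice context carries lock/unlock callbacks precisely because ID3D11DeviceContext is not thread safe. A sketch of serializing a texture copy through them; the function and variable names are hypothetical, the struct fields are the real libavutil/hwcontext_d3d11va.h API:)

    #define COBJMACROS
    #include <d3d11.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_d3d11va.h>

    /* Hypothetical helper: hold the hwcontext's lock around any use of the
     * shared immediate context, since ID3D11DeviceContext is not safe to
     * call from two threads at once. */
    static void copy_texture_locked(AVHWDeviceContext *dev_ctx,
                                    ID3D11Texture2D *dst, ID3D11Texture2D *src)
    {
        AVD3D11VADeviceContext *hwctx = dev_ctx->hwctx;
        hwctx->lock(hwctx->lock_ctx);
        ID3D11DeviceContext_CopyResource(hwctx->device_context,
                                         (ID3D11Resource *)dst,
                                         (ID3D11Resource *)src);
        hwctx->unlock(hwctx->lock_ctx);
    }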
sm2n_ has joined #ffmpeg-devel
haasn_ has joined #ffmpeg-devel
mindfreeze_ has joined #ffmpeg-devel
srikanth- has joined #ffmpeg-devel
sfan5_ has joined #ffmpeg-devel
vjaquez- has joined #ffmpeg-devel
tortoise_ has joined #ffmpeg-devel
Labnan- has joined #ffmpeg-devel
mindfreeze has quit [Ping timeout: 248 seconds]
tortoise has quit [Ping timeout: 248 seconds]
rcombs has quit [Ping timeout: 248 seconds]
sm2n has quit [Ping timeout: 248 seconds]
srikanth has quit [Ping timeout: 248 seconds]
jdarnley has quit [Read error: Connection reset by peer]
nevcairiel has quit [Read error: Connection reset by peer]
dlb76 has quit [Write error: error:80000068:system library::Connection reset by peer]
jannau has quit [Ping timeout: 248 seconds]
Labnan has quit [Ping timeout: 248 seconds]
lexano has quit [Ping timeout: 248 seconds]
keith has quit [Ping timeout: 248 seconds]
vjaquez has quit [Ping timeout: 248 seconds]
haasn has quit [Ping timeout: 248 seconds]
tortoise_ is now known as tortoise
vjaquez- is now known as vjaquez
mindfreeze_ is now known as mindfreeze
haasn_ is now known as haasn
dlb76 has joined #ffmpeg-devel
jannau has joined #ffmpeg-devel
AMM has quit [Read error: Connection reset by peer]
<kasper93>
IDirect3DSurface is only interop to get an IDXGISurface, which is not winrt. The user is not concerned about winrt if it were wrapped inside.
<BtbN>
But I can only get it on the WinRT thread, since I very much do get it from the WinRT graphics capture APIs
<BtbN>
And I don't see a way to get it out of the thread
<BtbN>
Only thing I can hope is that the FrameQueue, which can be created in a FreeThreaded way, is cool with being called on another thread
<BtbN>
then the request_frame callback can work as usual, and everything else runs in a background thread
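(One common shape for that "everything WinRT in a background thread, request_frame just pops frames" split, sketched with pthreads; everything named here is hypothetical and only illustrates the hand-off pattern, not the actual filter:)

    #include <pthread.h>
    #include <libavutil/fifo.h>
    #include <libavutil/frame.h>

    /* Hypothetical hand-off queue: the RoInitialize'd background thread pushes
     * captured frames, and request_frame (on whatever thread the caller uses)
     * only ever touches the mutex-protected FIFO, never WinRT itself.
     * q->fifo is allocated elsewhere, e.g. with
     * av_fifo_alloc2(8, sizeof(AVFrame *), AV_FIFO_FLAG_AUTO_GROW). */
    typedef struct HandoffQueue {
        pthread_mutex_t lock;
        pthread_cond_t  cond;
        AVFifo         *fifo;  /* stores AVFrame* pointers */
    } HandoffQueue;

    static void handoff_push(HandoffQueue *q, AVFrame *frame)
    {
        pthread_mutex_lock(&q->lock);
        av_fifo_write(q->fifo, &frame, 1);
        pthread_cond_signal(&q->cond);
        pthread_mutex_unlock(&q->lock);
    }

    static AVFrame *handoff_pop(HandoffQueue *q)
    {
        AVFrame *frame;
        pthread_mutex_lock(&q->lock);
        while (av_fifo_read(q->fifo, &frame, 1) < 0)
            pthread_cond_wait(&q->cond, &q->lock);
        pthread_mutex_unlock(&q->lock);
        return frame;
    }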
<haasn>
hurrah, new libswscale landed
<haasn>
still no subsampled support, that's for STF 2025 :)
<haasn>
what is the expected behavior of acrossfade if one input is shorter than the crossfade duration?
<wbs>
haasn: so what's the status of the new swscale; does all swscale use the new backend stuff now, or are both variants in place side by side?
<wbs>
i.e. did things get faster/slower, and do other architectures need to fill in new asm now, etc?
<haasn>
wbs: currently need -sws_flags unstable
<haasn>
most things should be faster on x86, other archs might be slower
<haasn>
ramiro is working on a NEON backend
<wbs>
nice
<haasn>
both variants are in place side by side for the time being, but I highly encourage testing and reporting bugs and speed regressions
<haasn>
expect non-bit-exact output
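(For anyone wanting to try the new path, a minimal example; the input name and target format are arbitrary, any conversion that pulls in swscale will do, and the flag name is taken from the message above:

    ffmpeg -i input.mkv -sws_flags unstable -pix_fmt yuv444p output.mkv

The -pix_fmt option forces a pixel-format conversion through the auto-inserted scaler, and -sws_flags unstable opts that scaler into the new backend.)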
<wbs>
is it covered by fate tests?
<haasn>
not directly at the moment; would require updating all fate tests due to non-bit-exact dither patterns
<wbs>
does it rely on compiler vectorization for decent perf, or does the x86 asm cover that entirely?
<haasn>
it does not rely on compiler vectorization for x86
<haasn>
it does rely on it for other platforms but the performance gain is limited because I decided to simplify the C backend to serve as more of a reference
<haasn>
rather than worrying about the performance of it
<haasn>
of course, most parts of swscale that this is replacing are also written in C :)
<haasn>
since atm it basically just covers the unscaled special converters, which are all/mostly C code
<wbs>
ok; and did I understand/remember correctly that the asm is mostly small snippets for various pieces of kernels - not needing to write 100 different conversion functions for each arch, like in old swscale?
<haasn>
scaling still goes through the old asm
<haasn>
yeah
<wbs>
that's very nice
<haasn>
well, you now need to write 100 different asm snippets; but they will combine to support new formats for free :)
<haasn>
a case study: bgrp10msbbe etc were recently added and are currently broken in swscale
<haasn>
the total effort to add it to the new backend was about 30 seconds of adding a new case label to two functions in swscale/formats.c
<haasn>
just defining the component order (B, G, R) and the shift (6)
<haasn>
between A) error out; B) crossfade over a shorter duration; and C) continue producing broken output; which would you prefer? re: acrossfade and short input files
<haasn>
I'm leaning B > A > C
s55 has quit [Quit: Bye]
s55 has joined #ffmpeg-devel
damithag has joined #ffmpeg-devel
<Compn>
won't you upset people by producing broken output?
<ramiro>
hopefully the proposal is accepted for stf and I can focus on making it work for non-jit, and then later get back to jit with a more c-friendly library.
damithag has quit [Ping timeout: 258 seconds]
ngaullier has quit [Remote host closed the connection]
<BtbN>
Is there something I need to do to compile a .cpp file in FFmpeg? It does invoke g++, but it passes it a bunch of unsupported options. I don't fully understand how decklink gets away with it.
<BtbN>
Like, it passes -std=c17 to g++
GewoonLeon has quit [Ping timeout: 248 seconds]
<BtbN>
It's especially unhappy that /usr/lib/gcc/x86_64-w64-mingw32/14/include/g++-v14/x86_64-w64-mingw32/bits/c++config.h:670:2: warning: #warning "__STRICT_ANSI__ seems to have been undefined; this is not supported"
<BtbN>
And yeah, configure really does undefine that in cppflags