michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 8.0 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
<Lynne>
jkqxz: RADV_PERFTEST=video_decode
<Lynne>
though make sure your mesa is compiled with all video codecs, since by default they cut out h264 and hevc, because redhat paranoia
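As a hedged sketch of what that means in practice (option names as of recent Mesa releases; check meson_options.txt in your tree before relying on them):

    meson setup build -Dgallium-drivers=radeonsi -Dvulkan-drivers=amd -Dvideo-codecs=all
    ninja -C build
    RADV_PERFTEST=video_decode ./ffmpeg -hwaccel vulkan -i input.mp4 -f null -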
<Traneptora>
> Issue #20462 opened: Home Questions Unanswered AI Assist Labs Tags Saves Chat Users Companies Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Looking for your Teams? FFmpeg libmp3lame conversion from WhatsApp .opus c (https://code.ffmpeg.org/FFmpeg/FFmpeg/issues/20462) by cb2981
<Traneptora>
what the lol
<ePirat>
Traneptora, I guess copy paste mistake? :D
<Traneptora>
yea, but like, why was that in your clipboard
<Traneptora>
(doesn't actually explain why the user is doing something programmatically but won't say what language)
<fjlogger>
[FFmpeg/FFmpeg] Issue #20474 opened: FFmpeg Version 8 Encryption/Decryption Incompatible with version 7 using cenc-aes-ctr encryption scheme (https://code.ffmpeg.org/FFmpeg/FFmpeg/issues/20474) by deif
<BtbN>
wbs: when you made the review comment that was instantly wrong, did you have the tab for it open for a long time? We're currently trying to pin down that bug, and so far the running theory was that it happens when the PR gets pushed while someone is already writing a review against an older commit.
<BtbN>
But in your case there was no push anywhere close to the comment
<Traneptora>
e.g. if (!memcmp(foo, "bar", sizeof("bar"))) iirc is semantically equivalent to if (!strcmp(foo, "bar")) but I'm wondering if there's a reason to use one or the other
<Traneptora>
since both will only return true if foo exactly equals "bar" in the first 4 bytes and false otherwise
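A minimal sketch of that equivalence (names illustrative). The practical difference is that memcmp with sizeof("bar") also reads and compares the terminating NUL, so foo needs at least 4 accessible bytes, while strcmp stops at the first mismatch or NUL:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *foo = "bar";

        /* compares 4 bytes, including the terminating NUL of "bar" */
        if (!memcmp(foo, "bar", sizeof("bar")))
            printf("memcmp says equal\n");

        /* stops at the first mismatch or NUL, no length needed */
        if (!strcmp(foo, "bar"))
            printf("strcmp says equal\n");

        return 0;
    }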
<BtbN>
like, where'd you get the length for strncmp from anyway? strlen() on argv, which will fail the same way if it somehow ends up not null terminated.
<jamrial>
i am pissed at how useless the aes_ctr test was
<JEEB>
so it even had a FATE test of sorts?
<jamrial>
yes
<jamrial>
but it only did an encrypt -> decrypt without bothering if the encrypted output was correct
<nevcairiel>
presumably if it round-trips, that was good enough =P
<jamrial>
i extended it to make it actually ensure the encrypted output was correct, but that was not enough, as shown by the fact that it should have tested state preservation too
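Not the actual FATE test, but a sketch of the kind of state-preservation check being described, using the public libavutil API (16-byte key and 8-byte IV assumed per aes_ctr.h): encrypting a buffer in one call and in two mid-block chunks must yield identical ciphertext if the counter state survives between calls.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <libavutil/aes_ctr.h>

    int main(void)
    {
        static const uint8_t key[16] = { 0 };
        static const uint8_t iv[8]   = { 0 };
        uint8_t src[64], whole[64], split[64];
        struct AVAESCTR *a = av_aes_ctr_alloc();
        struct AVAESCTR *b = av_aes_ctr_alloc();
        int i;

        if (!a || !b)
            return 1;
        for (i = 0; i < 64; i++)
            src[i] = i;

        av_aes_ctr_init(a, key);
        av_aes_ctr_init(b, key);
        av_aes_ctr_set_iv(a, iv);
        av_aes_ctr_set_iv(b, iv);

        av_aes_ctr_crypt(a, whole, src, 64);            /* one call            */
        av_aes_ctr_crypt(b, split, src, 24);            /* split mid-AES-block */
        av_aes_ctr_crypt(b, split + 24, src + 24, 40);

        printf("state preserved: %s\n", memcmp(whole, split, 64) ? "no" : "yes");

        av_aes_ctr_free(a);
        av_aes_ctr_free(b);
        return 0;
    }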
GewoonLeon has quit [Ping timeout: 260 seconds]
Kimapr_ has joined #ffmpeg-devel
Kimapr has quit [Remote host closed the connection]
BradleyS has quit [Read error: Connection reset by peer]
BradleyS has joined #ffmpeg-devel
minimal has joined #ffmpeg-devel
NullSound has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
<BtbN>
nevcairiel: I found where the limitation comes from: https://code.ffmpeg.org/FFmpeg/FFmpeg/commit/4f78711f9c28b11dae4e4b96be46b6b4925eb8c6 it claims you cannot make array textures that are render targets? But there are special structs to bind array textures as render targets? This doesn't sound right, and I also can't find any references for it online.
kimapr__ has joined #ffmpeg-devel
Kimapr_ has quit [Remote host closed the connection]
<nevcairiel>
i'm not aware of that limitation
<BtbN>
The default seems to be to not use an array texture anyway, and AMF seemingly can only consume array textures
<nevcairiel>
isnt that for 12
<BtbN>
Well, people are also saying AMF is failing to consume the frames from my capture filters
<BtbN>
and they're both not outputting array textures
<nevcairiel>
i see
<BtbN>
so I highly suspect it has the same limitation for d3d11
<BtbN>
I'm not set up to test AMF though
<BtbN>
Is there even a way for a user to control the initial pool size from CLI?
<BtbN>
I'm just creating the hwframes ctx internally after all
<nevcairiel>
I know that technically d3d11va decode can work with both array and plain, but plain is wildly undocumented and never used. There is a flag in the decoder information to indicate if the driver/hardware supports it, never actually checked if its set anywhere since I didnt care :P
<BtbN>
I'm just thinking how to cater to AMF here
<BtbN>
Patch the d3d11va hwcontext to do away with that "disable array texture if render target", and set an initial pool size and hope it's enough?
<BtbN>
Add an option to the filter to make it do that?
<BtbN>
the latter seems rather unintuitive
<nevcairiel>
the whole shared pool business is a bit stupid anyway, its neat for efficiency if it works, but if you constantly need to twiddle with the pool size because every component wants X frames to work with, maybe they should just get their own and a copy
<BtbN>
I could coerce it via extra_hw_frames
<BtbN>
if it's >0, add 5 to it and use it as initial pool size
<nevcairiel>
i think extra_hw_frames is the only user parameter to influence that
<BtbN>
well, my filters will still need to honor it
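Roughly what honoring it would look like when a component allocates its own D3D11 frames context (a sketch; how extra_hw_frames is plumbed through to a filter is left open here, and the +5 margin is just the number floated above):

    #include <libavutil/buffer.h>
    #include <libavutil/hwcontext.h>
    #include <libavutil/pixfmt.h>

    static AVBufferRef *alloc_frames_ctx(AVBufferRef *device_ref,
                                         int width, int height,
                                         int extra_hw_frames)
    {
        AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
        AVHWFramesContext *fc;

        if (!frames_ref)
            return NULL;

        fc            = (AVHWFramesContext*)frames_ref->data;
        fc->format    = AV_PIX_FMT_D3D11;
        fc->sw_format = AV_PIX_FMT_NV12;
        fc->width     = width;
        fc->height    = height;
        /* fixed-size texture-array pools can't grow later, so size them up front */
        if (extra_hw_frames > 0)
            fc->initial_pool_size = extra_hw_frames + 5;

        if (av_hwframe_ctx_init(frames_ref) < 0)
            av_buffer_unref(&frames_ref);
        return frames_ref;
    }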
<ramiro>
mkver: I'm trying to implement slice decoding for mjpeg using restart markers. what do you think would be the best way to split MJpegSliceContext out of MJpegDecodeContext? for mjpeg_decode_scan() itself, each slice would need to have mb_start, mb_end, gb, restart_count, and a block. the rest of the parameters could be read-only. In Mpeg12SliceContext, we still keep entire copies of MPVContext, which seems kind of wasteful and not very clean to me.
<BtbN>
nevcairiel: I just tried it with a 10-item array texture, and it's happily creating it and rendering into it.
<BtbN>
So I do not understand what it's going on about
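For reference, that quick test boils down to something like this (a sketch in C COM style; dimensions, format and the slice index are arbitrary, error handling trimmed):

    #define COBJMACROS
    #include <d3d11.h>

    static HRESULT create_array_rtv(ID3D11Device *dev,
                                    ID3D11Texture2D **tex,
                                    ID3D11RenderTargetView **rtv)
    {
        /* 10-slice texture array created with the render-target bind flag */
        D3D11_TEXTURE2D_DESC desc = {
            .Width      = 1920,
            .Height     = 1080,
            .MipLevels  = 1,
            .ArraySize  = 10,
            .Format     = DXGI_FORMAT_B8G8R8A8_UNORM,
            .SampleDesc = { .Count = 1 },
            .Usage      = D3D11_USAGE_DEFAULT,
            .BindFlags  = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE,
        };
        /* RTV onto a single slice of the array */
        D3D11_RENDER_TARGET_VIEW_DESC rtv_desc = {
            .Format         = desc.Format,
            .ViewDimension  = D3D11_RTV_DIMENSION_TEXTURE2DARRAY,
            .Texture2DArray = { .MipSlice = 0, .FirstArraySlice = 3, .ArraySize = 1 },
        };
        HRESULT hr = ID3D11Device_CreateTexture2D(dev, &desc, NULL, tex);
        if (FAILED(hr))
            return hr;
        return ID3D11Device_CreateRenderTargetView(dev, (ID3D11Resource*)*tex,
                                                   &rtv_desc, rtv);
    }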
<nevcairiel>
Like I said, I was never aware of such a problem, maybe it's some kind of driver quirk on some systems
<BtbN>
Intel calling it a Microsoft limitation claims otherwise. Really strange.
<BtbN>
I'm tempted to just propose removing the limit, and then seeing who complains
<BtbN>
lol, I picked a pool size of 10. And it works fine. But when I move the mouse -> FPS spike -> runs out of surfaces
<nevcairiel>
because its dynamically scaling the fps on movement?
<BtbN>
gfxcapture is full VFR
<BtbN>
it outputs a frame whenever the compositor gives it one
<BtbN>
they come with a timestamp attached
<BtbN>
so rapidly moving the mouse can make it produce hundreds of FPS
<BtbN>
maaaybe I should set the default cap to 60, down from the max of 1000
<nevcairiel>
as long as its configurable
<BtbN>
You can set a max interval on the API
<BtbN>
or min interval, rather
<BtbN>
but beyond that, it will throw frames at you
<BtbN>
if you want a steady FPS, put an fps filter behind it
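i.e. (filter name as used above, everything else illustrative) a graph along the lines of

    gfxcapture,fps=60,hwdownload,format=bgra

rather than feeding the raw VFR output straight into something that expects CFR.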
<mkver>
ramiro: IIRC you are wrong about the rest of the parameters; a new Huffman table can be transmitted in between scans, so this would need to be in your slice context, too.
<mkver>
And yes, MPVContext still needs to be broken up further.
<ramiro>
mkver: between scans, yes, but not inside a call to mjpeg_decode_scan().
<ramiro>
I'm thinking of having typedef struct MJpegSliceContext { struct MJpegDecodeContext *s; GetBitContext gb; int restart_count; int mb_start; int mb_end; } MJpegSliceContext;, where MJpegDecodeContext has a field MJpegSliceContext slice_contexts[MAX_THREADS];
<mkver>
ramiro: The pointer to MJpegDecodeContext should be const.
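Spelled out with that const applied, the proposed layout reads something like:

    typedef struct MJpegSliceContext {
        const struct MJpegDecodeContext *s; /* shared, read-only frame-level state */
        GetBitContext gb;                   /* per-slice bitstream reader */
        int restart_count;
        int mb_start;
        int mb_end;
    } MJpegSliceContext;

    /* and in MJpegDecodeContext: */
    MJpegSliceContext slice_contexts[MAX_THREADS];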
linkmauve has left #ffmpeg-devel [Disconnected: closed]
<ramiro>
mkver: oh, right. thanks for spotting. do you see any issue with that configuration and the extra indirection to get s, or does it look good enough?
linkmauve has joined #ffmpeg-devel
<mkver>
The extra indirection could cost a register; hopefully it doesn't affect single-threaded performance.
<mkver>
(I wondered whether it would be better to use a GetBitContext on the stack in mjpeg_decode_scan() and mjpeg_decode_scan_progressive_ac() and use a GetByteContext everywhere else, as JPEG is mostly byte-aligned.)
<ramiro>
mkver: I'll have a look at that as well. thanks!
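A rough sketch of that idea (invented names, not existing mjpegdec code): keep a GetByteContext for the byte-aligned marker/header parsing and only set up a short-lived GetBitContext on the stack for the entropy-coded scan data.

    #include "libavcodec/bytestream.h"
    #include "libavcodec/get_bits.h"

    static int decode_scan_payload(GetByteContext *gbyte)
    {
        GetBitContext gb;                      /* lives on the stack, per scan */
        const uint8_t *buf  = gbyte->buffer;   /* current byte position        */
        int            left = bytestream2_get_bytes_left(gbyte);
        int ret = init_get_bits8(&gb, buf, left);

        if (ret < 0)
            return ret;
        /* ... read Huffman-coded MCUs from gb up to the next restart marker ... */
        bytestream2_skip(gbyte, (get_bits_count(&gb) + 7) / 8);
        return 0;
    }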