michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.1.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
iive has quit [Quit: They came for me...]
System_Error has quit [Ping timeout: 244 seconds]
aaabbb has joined #ffmpeg-devel
System_Error has joined #ffmpeg-devel
___nick___ has joined #ffmpeg-devel
___nick___ has quit [Client Quit]
___nick___ has joined #ffmpeg-devel
rvalue has quit [Ping timeout: 248 seconds]
rvalue has joined #ffmpeg-devel
pelotron has quit [Quit: ~pelotron()]
pelotron has joined #ffmpeg-devel
jamrial has quit []
Martchus has joined #ffmpeg-devel
Martchus_ has quit [Ping timeout: 272 seconds]
minimal has quit [Quit: Leaving]
cone-223 has joined #ffmpeg-devel
<cone-223> ffmpeg Lynne master:a9b2c10eee9c: hwcontext_vulkan: use host image copy
<cone-223> ffmpeg averne master:f604d1093f05: vulkan/ffv1dec: fix leak in FFVulkanDecodeShared
<haasn> ramiro: probably you won't see this until you're back but I finished my split_planes branch
meego has quit [Remote host closed the connection]
<haasn> one of the major changes I had to make for the architecture is that I introduced an SWS_OP_ASSUME pseudo-op which just carries value range metadata
<haasn> we can insert this at the correct place in the pixfmt decode pipeline (after SWAP_BYTES, SWIZZLE and RSHIFT)
<haasn> it gets eliminated again during optimization so no back-end will ever see it
<haasn> but it does make the value range tracking _much_ cleaner than it was before
<haasn> (and, of course, we could have a flag to omit this assumption op, like SWS_ALLOW_OUT_OF_RANGE or whatever)
<haasn> (which would then ensure that we properly clamp any illegal values that may occur to the valid range)
<haasn> or maybe SWS_STRICT
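For illustration, a minimal C sketch of what such a range-carrying pseudo-op could look like; the names below (RangeOp, OP_ASSUME, assume_range) are assumptions for this example only, not the actual ops API of the swscale rewrite:

    /* Hypothetical sketch, not the real swscale ops API. An ASSUME op only
     * records the legal value range at this point in the pipeline; the
     * optimizer folds that range into neighbouring ops and then removes
     * the ASSUME node, so no back-end ever has to implement it. */
    enum RangeOpType { OP_SWAP_BYTES, OP_SWIZZLE, OP_RSHIFT, OP_ASSUME };

    typedef struct RangeOp {
        enum RangeOpType type;
        float min, max;              /* only meaningful for OP_ASSUME */
    } RangeOp;

    /* Inserted right after the raw pixfmt decode steps, e.g. with
     * min=0 and max=1023 for 10-bit content stored in 16-bit words. */
    static RangeOp assume_range(float min, float max)
    {
        return (RangeOp){ .type = OP_ASSUME, .min = min, .max = max };
    }

With a flag such as the suggested SWS_ALLOW_OUT_OF_RANGE or SWS_STRICT, the assumption op would be omitted, forcing an explicit clamp of out-of-range values instead.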
<haasn> nice, forgejo shows "%!d() commits" on repos
<haasn> BtbN: something seems seriously broken there
<BtbN> seems like valkey hard-locked itself
<BtbN> restarted it, and stuff is fine again
<BtbN> Might need to reduce memory allocation for it a little
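(For context, capping Valkey's memory use is normally done via its maxmemory settings; the snippet below is a placeholder sketch, not the actual configuration of this deployment:)

    # assumed valkey.conf tuning, figures are illustrative only
    maxmemory 512mb
    maxmemory-policy allkeys-lru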
<fflogger> [newticket] nicol: Ticket #11633 ([avfilter] Loading more than 2 files in the signature filter consumes a lot of memory) created https://trac.ffmpeg.org/ticket/11633
meego has joined #ffmpeg-devel
IndecisiveTurtle has joined #ffmpeg-devel
<meego> Good morning. I'm very new to working on ffmpeg and am working on SIMD optimization for f_ebur128. I am writing tests, and since EBU R 128 has a full compliance test suite, I'm thinking of implementing the whole suite (70 sample files, 300MB in total uncompressed).
<meego> I understand that this many files cannot be merged in-repo in tests/ or uploaded to fate-suite. What's the recommended way to deal with these large compliance sample sets?
<TheVibeCoder> you do tests privately
<TheVibeCoder> besides, the filter operates in floats
<JEEB> meego: we do have various reference test suites as part of FATE already
<meego> Big shops like Netflix often sponsor things like this. A 200 USD/mo VM is a rounding error for them. I know at least one former ffmpeg contributor who works there now
<TheVibeCoder> ok, contact me when everything is ready to roll on
<TheVibeCoder> perspective is GPLv2+, zoompan is LGPLv2.1+
<mkver> 300MB is way too much.
emmastrck has joined #ffmpeg-devel
emmastrck has joined #ffmpeg-devel
emmastrck has quit [Changing host]
emmastrck is now known as pshufb
pshufb has quit [Ping timeout: 272 seconds]
<mkver> Lynne: src/libavutil/vulkan.c:111:16: error: ‘VK_IMAGE_USAGE_HOST_TRANSFER_BIT’ undeclared (first use in this function); did you mean ‘VK_IMAGE_USAGE_HOST_TRANSFER_BIT_EXT’?
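(That symbol was promoted to core from VK_EXT_host_image_copy in Vulkan 1.4, so headers predating 1.4 only ship the EXT-suffixed name. A hedged illustration of a fallback define, not necessarily the fix that was applied:)

    /* Assumed compatibility shim for pre-1.4 Vulkan headers, which only
     * define the EXT-suffixed name from VK_EXT_host_image_copy. */
    #ifndef VK_IMAGE_USAGE_HOST_TRANSFER_BIT
    #define VK_IMAGE_USAGE_HOST_TRANSFER_BIT VK_IMAGE_USAGE_HOST_TRANSFER_BIT_EXT
    #endif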