michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 7.1.1 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
<haasn>
ramiro: how realistic do you think upstreaming asmjit is?
<haasn>
with the possibility of swscale-v2 being merged looming on the horizon we may want to start thinking seriously about how we're gonna tackle ARM support
Traneptora has joined #ffmpeg-devel
Anthony_ZO has quit [Ping timeout: 276 seconds]
cone-785 has joined #ffmpeg-devel
<cone-785>
ffmpeg Link Mauve master:d5f4a5512339: avutil/hwcontext_vulkan: Query the correct format
<cone-785>
ffmpeg Max Rudolph master:9537d91e8f1e: avformat/cinedec: add support for additional Bayer CFA patterns for Phantom CINE file format
<cone-785>
ffmpeg Michael Niedermayer master:8c920c4c3961: Remove libpostproc
<cone-785>
ffmpeg Michael Niedermayer master:1b643e3f65d7: tests/fate/filter-video: Fix dependancy for codecview
<jamrial>
michaelni: if the intention is to include libpostproc in 8.0, why push that patch now before the branch is created?
<jamrial>
or you meant including it as a "plugin"?
<michaelni>
I think it's a good opportunity to test the concept of a source plugin
<michaelni>
also, contracts with STF say 7th May as the deadline
<kierank>
lol
toots5446 has joined #ffmpeg-devel
<toots5446>
What's the meaning of PARSER_FLAG_COMPLETE_FRAMES and how does it relate to AVSTREAM_PARSE_HEADERS ?
microlappy has joined #ffmpeg-devel
<jamrial>
toots5446: the latter is a flag set by demuxers to set up parsers with the former
<jamrial>
and its purpose is letting the parser know it will be getting entire codec-specific packets instead of chunks broken at arbitrary points, so no packetization will be required, only header parsing
microlappy has quit [Quit: Konversation terminated!]
microlappy has joined #ffmpeg-devel
<jamrial>
the AVStreamParseType enum should probably not be public, for that matter. the relevant AVStream field is internal
microlappy has quit [Quit: Konversation terminated!]
<toots5446>
jamrial: Thanks. I need a mechanism to tell libavcodec/vorbisdec.c to reinitialize some decoding values based on packets processed in libavformat/oggparsevorbis.c
microlappy has joined #ffmpeg-devel
<toots5446>
Any existing mechanism I could tap into? My first approach was to add a packet flag..
\\Mr-C\\ has quit [Remote host closed the connection]
microlappy has quit [Client Quit]
<jamrial>
depends on what kind of information is signaled by the container. it could be done as side data
<toots5446>
After parsing the header and setup packet in a secondary chained ogg/vorbis stream, the context is ready for the decoder to re-initialize.
<toots5446>
Currently, this is done by receiving the actual header packets in the decoder but we're suppressing them
<toots5446>
So I was thinking of adding an AV_PKT_FLAG_REINIT to the first data packet after those headers.
<toots5446>
would that make sense?
<jamrial>
why are the header packets being suppressed?
<toots5446>
That's part of the work to support chained ogg bitstreams; according to Lynne, header packets should never be passed out of the demuxer.
<Lynne>
what's wrong with AV_PKT_DATA_NEW_EXTRADATA?
<toots5446>
You know what
<toots5446>
yeah lol just went back to the original thread and found the same answer haha
<Lynne>
most of the boilerplate math is implemented
<Lynne>
JEEB: not a lot
<JEEB>
ok
<Lynne>
usac, right?
<JEEB>
USAC and others (since the whole thing is split into X specs and all)
<Lynne>
I'm working on mps212 which should decode quite a bit more, as the fraunhofer decoder uses it at low rates
<JEEB>
ok
<Lynne>
DRC isn't implemented yet, but more critically, it isn't implemented yet
<Lynne>
which means that it can't malfunction and unleash a screamer :)
<JEEB>
yea, MPEG-D DRC and what that other thing was... surround
<Lynne>
mpeg-d?
<JEEB>
yea the DRC stuff has its own split methinks
<Lynne>
is that one of the SIX OTHER WAYS that loudness can be signalled in aac?
<JEEB>
it's the thing that I shared IIRC, it has the XYZ-XYZ number as well as the group name (like 14496 is MPEG-4, and the thing with HEVC and the new audio format is MPEG-H)
<Lynne>
which leads me to my second question, what sort of music *do* they listen to that can't be listened to in any other codec?
<nevcairiel>
don't question the motivation of audiophooles
<nevcairiel>
it only leads to despair
<Lynne>
I thought I understood them fairly well, having built and designed a tube amp many years ago
<JEEB>
seems to mostly just be testing of this person's encoder as he's a community member
<Lynne>
blinky lights == good
<JEEB>
or at least I want to keep it on that level :D
TheVibeCoder has joined #ffmpeg-devel
<TheVibeCoder>
anybody researched that strange ec3 sample decoding bug?
<TheVibeCoder>
paying 48.000 EUR to fix this bug
mkver has quit [Ping timeout: 244 seconds]
TheVibeCoder has quit [Quit: Client closed]
paulk has quit [Ping timeout: 272 seconds]
paulk has joined #ffmpeg-devel
microlappy has joined #ffmpeg-devel
TheVibeCoder has joined #ffmpeg-devel
microlappy has quit [Quit: Konversation terminated!]
mkver has joined #ffmpeg-devel
<ramiro>
haasn: I don't think it's realistic to upstream neon asmjit in the near future
<ramiro>
I'm not a huge fan of asmjit itself, since it's c++, but it's the best I could find to quickly and easily test jit.
<ramiro>
I have a forked build of asmjit with a few bugfixes and added functionality, so no distro currently would be able to use it as-is.
<ramiro>
what I think is realistic in the short term is to use the code I'm writing to generate assembly files that would function similar to your x86 backend
<ramiro>
it would be slower than full jit, but it would be similar to the speed penalty the x86 code has as well, and it wouldn't require an added dependency
<ramiro>
those files would be generated by asmjit, but added to the source repository. this would prevent us from having to write the code twice, for asmjit and standalone assembly files.
<ramiro>
btw, I think the concept above could be used for the x86 backend as well. convert your code to be runtime generated, and use that as a base to create the assembly files that you currently have. this would make it easier to move to jit on x86 when the opportunity arises
<ramiro>
what I would really like to do is write a runtime assembler in C, which would be tailored to small kernels like we have in swscale
<ramiro>
then we could use that runtime assembler either to jit, or to generate the assembly files at build time
<kierank>
18:56:26 <• Lynne> I think it would be up to them to decide
<kierank>
I don't think that's how it works
<ramiro>
haasn: also, for systems that don't support jit, we could pre-generate input and output functions to an intermediate format. this way we wouldn't need to generate all combinations. they wouldn't all be as fast as they could be, but they would still be faster than using the cfp approach, which adds significant overhead.
Traneptora has quit [Quit: Quit]
<kierank>
Lynne: I concluded raptorq is impractical for multimedia
<TheVibeCoder>
why should they listen to you
<kierank>
As I understand you send symbols instead of data
TheVibeCoder has quit [Quit: Client closed]
<kierank>
And need enough symbols
TheVibeCoder has joined #ffmpeg-devel
<kierank>
But if you don't have enough symbols you get basically nothing
<kierank>
Instead of a smooth degradation
<TheVibeCoder>
what is raptorq really?
<kierank>
Forward error correction system
<TheVibeCoder>
low level protocol stuff?
<kierank>
Yes
<kierank>
I personally think impractical for multimedia
<kierank>
But I might be wrong
<TheVibeCoder>
make it part of libavformat for protocol stuff?
<kierank>
What protocol
<kierank>
It's udp
<TheVibeCoder>
lol
<kierank>
Difficult to do well in a format without timers
<TheVibeCoder>
just replace libavformat protocols with libcurl?
<TheVibeCoder>
because http(s) sucks in libavformat
<kierank>
Protocols are hard
<kierank>
That's why it's laughable to suggest ffmpeg is gonna get videoconferencing
<TheVibeCoder>
skype is dead
<TheVibeCoder>
ffcall is risen
<kierank>
Very sad Skype is dead
<kierank>
Teams doesn't work for me
<kierank>
Grok says raptorq can do partial recovery
<TheVibeCoder>
i trust celebrity news more than fake ai
<TheVibeCoder>
The recovery properties of the RaptorQ decoder are exceptional
<Lynne>
it can do full recovery, it's just a nicer packaging and specification of raptor codes
<Lynne>
it's literally like any other FEC
<Lynne>
so I'm not sure how you got the impression that it's not usable in multimedia
<TheVibeCoder>
kierank: what do you use instead of raptorq now in your products?
<kierank>
Xor fec and srt
<kierank>
Lynne: in live, in a packet loss scenario, how do you know when to start
<kierank>
To produce a continuous output
<kierank>
If it's just a bunch of symbols
<kierank>
In xor and ldpc you know the matrix size
<kierank>
So you can make the delay N matrices
<TheVibeCoder>
i think raptorq has similar possibilities / features. it must have
<kierank>
The whole point of raptorq is there are no matrices
<kierank>
For file it kind of makes sense but for live with fixed latency when do you start
<Lynne>
kierank: there are no explicit matrices, but data is organized in blocks instead
<kierank>
How many packets are in a block
<kierank>
And do you basically start outputting data after N blocks
<Lynne>
non-fixed amount sent during configuration
<TheVibeCoder>
Lynne: have you abandoned your av protocol for raptorq?
<Lynne>
it's not abandoned, I still hack on it
<Lynne>
yes, I wrote the specifications with raptorq FEC in mind
<kierank>
Where is the block size signalled
<TheVibeCoder>
ask grok
<kierank>
In xor it's signalled in the fec and you estimate the latency
<kierank>
And then output based on a fixed latency (not possible in FFmpeg without timers)
<kierank>
But in raptorq how do you make that decision
<kierank>
TheVibeCoder: grok doesn't get it
<TheVibeCoder>
other llms ?
<TheVibeCoder>
sonet, claude, gemini, whatever
<kierank>
Grok basically says "out of band"
<TheVibeCoder>
chat with copilot
<TheVibeCoder>
i'm a little scared because yandex search results are becoming as useless as google's
<TheVibeCoder>
when i need real PDFs about DSP algorithms and not crap and stupid ads
<TheVibeCoder>
also google likes to censor results and to deem results irrelevant when in fact the 'irrelevant' results are more relevant than the top search results
<BtbN>
The whole freetype/harfbuzz situation is a mess, my god
<BtbN>
harfbuzz depends on freetype, which depends on harfbuzz
<BtbN>
Building this mess isn't so bad, but then linking against the resulting libraries seems literally impossible
<BtbN>
no matter which way they're ordered, one of them is missing symbols
<Lynne>
it's a standardized packet that anyone using raptorq should implement
<TheVibeCoder>
Please stop the planet, I want to get off. I've had enough.
<Lynne>
it does happen out of band, indeed, so if you miss it, you'll have no way of interpreting the info
<kierank>
TheVibeCoder: why
<TheVibeCoder>
kierank, unrelated to this channel
<Lynne>
I don't know how you could assume that AIs would know enough about raptorq, its not popular enough yet to have enough text for LLMs to digest and parrot
<TheVibeCoder>
AIs know everything and nothing about ffmpeg
TheVibeCoder has quit [Quit: Client closed]
<compn>
kierank, yandex (and google) used to give results based on your search terms not what the search engine thought you were looking for
<compn>
yandex was the last holdout. but now it's like google and bing. it just ignores your search terms and gives you whatever
<compn>
at least yandex still respects +operator
<Lynne>
yandex hasn't changed at all for me, though