ChanServ changed the topic of #ffmpeg to: Welcome to the FFmpeg USER support channel | Development channel: #ffmpeg-devel | Bug reports: https://ffmpeg.org/bugreports.html | Wiki: https://trac.ffmpeg.org/ | This channel is publically logged | FFmpeg 7.1.1 is released
<MarioMey> Hello there. I'm using a command that exports jpgs, but -n doesn't work... it constantly overwrites the jpg files... what am I doing wrong?
<MarioMey> ffmpeg -n -i 123.mp4 -vf "select='eq(mod(n-5\,8)\,0)',removelogo=mask.png:enable='gte(t,15)'" -vsync vfr -frame_pts 1 export/img_%05d.jpg
<MarioMey> (the command applies removelogo to part of the video, but only to 1 frame in every 8, and writes each jpg with the original frame number in its name)
<MarioMey> (by running this command 8 times, I take advantage of 8 CPU threads)
<MarioMey> ChatGPT says this is a limitation of ffmpeg... that -n applies to a "complete file", not to individual files such as jpgs. Is this correct?
<BtbN> what are you even trying to do?
<BtbN> ffmpeg can absolutely write out a sequence of images, but it's unclear to me what problem you are even encountering
<BtbN> that "-n" there makes no sense to me
<MarioMey> A few minutes ago, I ran this command and it output several files. I cancelled it with Ctrl-C. Now I want to continue with the frames that haven't been processed yet. How do I know which ones those are? Easy: the finished files are already there. If ffmpeg didn't overwrite those files, it would skip them and start with the unprocessed frames.
<MarioMey> But as the `-n` option doesn't work on JPGs, ffmpeg *has to* process those frames again, overwriting those files.
<kepstin> hmm. i would have expected ffmpeg to exit immediately and process _no_ files if you use the -n option
<kepstin> but i guess the way that option is handled in the ffmpeg cli doesn't take into account the way the image2 muxer does file patterns
<MarioMey> kepstin: I expected ffmpeg to skip the files already processed and continue with the others.
<kepstin> that expectation is incorrect.
<MarioMey> Why?
<MarioMey> After Effects and Blender have this option.
<kepstin> the -n option makes ffmpeg exit immediately and stop processing if the output file already exists.
<kepstin> and even if there was a muxer option which did what you want, the way ffmpeg works means it would still have to decode, filter, encode all the "skipped" frames so you wouldn't see much speedup.
<MarioMey> You can put several computers on a network, all of them processing the same file and writing JPG files into the same shared folder. If a file is already there, AE or Blender doesn't render it again; it skips it and continues with the others.
<ePirat> generally your use case is not something the ffmpeg cli is tailored to; it's usually not used to process and dump single frames across several different invocations
<kepstin> one thing you can do with ffmpeg is use a seek on the input, and then set -start_number on the output.
<kepstin> as a way to continue
<kepstin> but that requires that you calculate the correct seek value yourself
<MarioMey> kepstin: I understand. ffmpeg writes the file when it finishes processing the frame. AE/Blender must check for the file before rendering it.
<MarioMey> ePirat: is my use case really that rare?
<kepstin> the ffmpeg cli tool is very much a linear processing system. you cue up a bunch of inputs, set up the processing to do, then it runs through from beginning to end.
<ePirat> MarioMey, yes
<kepstin> you can use the ffmpeg libraries in other ways, but the ffmpeg _cli tool_ is not designed for this sort of thing.
<MarioMey> I'm using the removelogo filter and it runs on one thread. So I put my 8 threads to work by running 8 instances of ffmpeg. But I had to stop 4 of them in the middle. When I wanted to continue... I hit this issue.
<MarioMey> Now I've put those 4 instances back to work.
<kepstin> if you want to resume, you can calculate the seek point (-ss input option) appropriate to where you left off, and then set the -start_number muxer option to the next number after the last file written. then ffmpeg will open the file starting at the point you set, and write output from that point on.
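kepstin's resume recipe might look like this as a sketch; the seek time, frame number, simplified filter and the 25 fps rate are all assumptions for illustration, not values from MarioMey's actual run:

```shell
# Hypothetical resume point: the last complete file was export/img_00447.jpg
# and the source is assumed to be 25 fps, so frame 448 sits at 448/25 = 17.92 s.
ffmpeg -ss 17.92 -i 123.mp4 \
       -vf "removelogo=mask.png" \
       -start_number 448 export/img_%05d.jpg
```

The burden kepstin mentions is real: you have to compute that 17.92 yourself from the frame rate and the last file written.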
<kepstin> honestly why are you writing to separate image files anyways tho? for a big video file that will take a lot more space than using a proper video codec.
<MarioMey> I understand... but consider that I had 4 ffmpeg instances that finished their files. So I had 0001.jpg, 0002.jpg, 0003.jpg, 0004.jpg... and then 0009.jpg, 0010.jpg, 0011.jpg, 0012.jpg... and the other instances stopped somewhere among those files...!
<MarioMey> Now the other 4 instances have finished. Next I will combine those jpgs and the audio into a new mkv file.
<kepstin> if you had multiple ffmpeg processes writing in the same directory, then the files you have are probably trash anyways
<kepstin> (it doesn't use atomic writes by default, and all of the ffmpeg processes would have been uselessly duplicating work)
<kepstin> just run a single ffmpeg process which reads one video file, does the processing, and encodes one output. it'll be faster than what you're doing now.
<kepstin> encodes one output video file.
<kepstin> you can run multiple ffmpeg processes working on _different_ video files in parallel just fine
<johnjaye> does ffmpeg do multiple threads by default?
<johnjaye> i think when i run an encode all 4 of my cores go up
<kepstin> it depends. some things yes, some things no
<johnjaye> i see
<kepstin> many video decoders and encoders are multithreaded. most (all?) filters are not
<kepstin> with how ffmpeg is typically used, the video encoder is often the slowest part, so the single threaded filters might not be a bottleneck
<kepstin> but some filters are pretty slow, which can be an issue
<kepstin> anyways, the extra work that MarioMey is doing to write to images and re-read them, combined with the fact that they're not actually getting any multi-thread speedup from how they're running ffmpeg, means doing the video decode/filter/encode in one process will be faster than what they're doing now.
<kepstin> despite possibly not using their multi-core cpu that well
<kepstin> removes a generation loss from writing the jpeg images too, so the result will be better quality :)
<MarioMey> kepstin: sorry, but you are wrong. I did it my way and I got a correct output file. I had complications, it's true, but running several ffmpeg instances the way I did really speeds up the process.
<MarioMey> Also... it's more complicated to run several different ffmpeg commands at the same time... and then mux everything back together...
<kepstin> you got lucky then, that none of the ffmpeg processes ended up writing the same image at the same time and corrupting the file. and it _did not_ speed up the result.
<MarioMey> But, with this technique and in this case, it should speed things up by up to 8x.
<kepstin> unless you had set some extra options on the ffmpeg commands to limit each one to process a different part of the file, _all_ of the ffmpeg commands processed the entire file from beginning to end
<MarioMey> No, the ffmpeg instances weren't writing the same files!
<kepstin> so it would be the same speed as running one (in fact, if your cpu has turbo, it might be faster to run just one)
<kepstin> how did you configure them to write different files?
<MarioMey> This is the command that processes frames 1, 9, 17 and so on:
<MarioMey> ffmpeg -i 123.mp4 -vf "select='eq(mod(n-1\,8)\,0)',removelogo=mask.png:enable='gte(t,15)'" -vsync vfr -frame_pts 1 export/img_%05d.jpg
<MarioMey> And this is, for example, for frames 2, 10, 18, etc:
<kepstin> ah, and you're using different values in place of 0 in that select filter for the different commands?
<MarioMey> ffmpeg -i 123.mp4 -vf "select='eq(mod(n-4\,8)\,0)',removelogo=mask.png:enable='gte(t,15)'" -vsync vfr -frame_pts 1 export/img_%05d.jpg
<MarioMey> Sorry, that one was for frames 4, 12 and so on.
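For context, the eight invocations MarioMey describes differ only in the offset inside the select expression, so they could be launched with a loop along these lines (a sketch; note that every instance still decodes and filters its own full pass over the input, so only the filter work is effectively split):

```shell
# One background ffmpeg per residue class of the frame counter n (k = 0..7).
# -frame_pts 1 names each jpg after its original frame number, so the eight
# instances write disjoint filenames into the same export/ directory.
for k in $(seq 0 7); do
  ffmpeg -i 123.mp4 \
         -vf "select='eq(mod(n-$k\,8)\,0)',removelogo=mask.png:enable='gte(t,15)'" \
         -vsync vfr -frame_pts 1 export/img_%05d.jpg &
done
wait
```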
<kepstin> that would work, but it's still not great.
<johnjaye> i've been wondering about the proper way to deal with loss issues from transcoding. right now i just carelessly reencode over and over without thinking about it. in such situations is it best to switch to ffv1 or something else that minimizes loss?
<MarioMey> Yes, it's great! 😁
<kepstin> johnjaye: best option is to simply not transcode
<MarioMey> If I can speed it up by 8x and get the same result... isn't that great?
<kepstin> MarioMey: a better way to handle this would be to split the input file into chunks (using -c copy to skip re-encoding) with the segment muxer, then run ffmpeg commands separately on each segment in parallel encoding to video files, then use a final ffmpeg with the concat demuxer and -c copy to merge the result files into a single file
<kepstin> the addition of the writing to jpeg and reloading from jpeg later adds a bunch of overhead that makes it slower than 8x compared to doing the full encode in one command
<kepstin> plus you get a generation loss from the jpeg encoding.
<johnjaye> select is a muxer not a filter?
<kepstin> johnjaye: no, it's a filter.
<johnjaye> er segment
<MarioMey> Well, that's another way to do it.
<johnjaye> ok. you called it the segment muxer.
<kepstin> segment is a muxer, yes. it's a way to take one input file and split it into multiple output files based on time, file size, etc.
<MarioMey> That's also great 😁.
<johnjaye> how does -c copy work with that? it splits on keyframes or something?
<kepstin> segment muxer always runs after the encode, so it always has to split on keyframes. works the same with or without -c copy
<johnjaye> ok. it's listed on the ffmpeg filters page which confused me
<johnjaye> i think i've used -c copy with -ss and -t and gotten strange results sometime
<kepstin> the segment filter is different from the segment muxer
<kepstin> (they do entirely different things)
<kepstin> when you use -c copy with -ss, it should copy starting from nearest keyframe before the seek point you requested. depending on where you have the -t in the command, that might either calculate based on time since the keyframe or time since the seek point.
<johnjaye> so if i just want to split segments based on keyframes i would use the segment muxer?
<johnjaye> i.e. give a segment for every keyframe?
<kepstin> johnjaye: i think you will get that behaviour if you set the segment_time option sufficiently small.
<johnjaye> hrm. that would create a ton of files?
<kepstin> depends on how the video was encoded, but probably yes
<johnjaye> ok. my use case is that i had a short clip, one minute, where a scene was happening, and it kept cutting to the main character watching the scene
<johnjaye> i wanted to clip those parts out but it was super annoying. sometimes it would be for 10 seconds sometimes for 2 seconds and it was at least 5 clips i had to find manually
<kepstin> yep, with normal encoder options on modern encoders the keyframes will be placed at variable times - the encoder picks locations where it's most efficient to add a keyframe
<kepstin> which will usually be on scene cuts or places where there's a lot changing in the frame all at once
<furq> or every five seconds if you got it off youtube
<kepstin> livestreams almost always use fixed keyframe intervals too, to ensure that short segments can be made at even time intervals and people can connect and pick up the video playback without waiting a long time
<johnjaye> google suggests ffmpeg -i clip.mp4 -vf "select=eq(pict_type\,I),scale=73x41" -vsync vfr -qscale:v 2 thumbnails-%02d.jpeg
<kepstin> what are you trying to do?
<johnjaye> split the file into chunks based on the keyframe changing
<kepstin> well, the command you just pasted certainly doesn't do that
<johnjaye> right but it showed me i think this approach works
<johnjaye> most of the keyframes were on scene changes
<johnjaye> but i don't know exactly how the segment muxer works or if that's even the right solution here
<kepstin> again - the location of keyframes will depend on the input file encoder settings.
<kepstin> so just because it works on one file doesn't mean it'll do what you want on another file :)
<kepstin> but yeah, if you want a separate video file for each gop in the input file, the segment muxer is probably the most straightforwards way to do that.
<kepstin> (a 'gop' or 'group of pictures' being a set of frames starting at a keyframe / idr frame - i.e. a frame which marks a boundary after which no frames can reference data from frames before the boundary - up to the next such frame)
<kepstin> technically, some video formats support "I" frames which are not "IDR" frames, and therefore cannot be used as a seek point or segment cut point.
<johnjaye> i didn't know I and IDR were different
<johnjaye> ah ok. the only flaw in this approach is I have a bunch of files
<johnjaye> so i wind up with output1.mp4, output3.mp4, output13.mp4 say, and i have to concat them
<furq> select=eq(key,1) will get you every keyframe
<johnjaye> if i put output1 through output 30 into a text file the concat muxer fails
<furq> fails how
<kepstin> note that you want the concat _demuxer_ (not "muxer")
<johnjaye> right
<kepstin> one of the options on the segment muxer will have it write an ffconcat playlist file which you could then edit and use as an input to the following ffmpeg command
<johnjaye> furq: i want to split a file into segments of keyframes. or i guess Group of Pictures is the technical term?
<furq> right but you said once you have the segments you can't concat them
<johnjaye> er right
<johnjaye> i'm trying to do the m3u8 thing kepstin said but i don't see an example in the docs
<kepstin> i said ffconcat not m3u8
<johnjaye> er right
<johnjaye> -segment_format_options=ffconcat is not the right syntax. but -ffconcat wasn't either
<kepstin> reading the docs tends to work better than guessing random things
<johnjaye> ah i think i have it. it's -segment_list_type ffconcat
<kepstin> yes, or you can just use the '.ffconcat' extension on the -segment_list filename
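Put together, the split-then-rejoin round trip might look like this (a sketch; the input name and segment length are made up):

```shell
# Split by stream copy, writing an ffconcat playlist alongside the parts
# (the .ffconcat extension on -segment_list implies -segment_list_type ffconcat).
ffmpeg -i in.mp4 -c copy -f segment -segment_time 60 \
       -segment_list parts.ffconcat part%03d.mp4

# Rejoin with the concat demuxer, again without re-encoding.
ffmpeg -f concat -i parts.ffconcat -c copy joined.mp4
```

Editing parts.ffconcat between the two steps (deleting unwanted segments) is the workflow kepstin is pointing at.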
<johnjaye> oh ok. i just realized i needed that too
<kepstin> if you don't set that extension, you can still use the file. it just means you might need to manually specify formats in more places.
<johnjaye> i'm not sure why i didn't see that in the provided examples.
<johnjaye> i think each example was about something else so i didn't see the segment_list option being specified
<kepstin> all but one of the examples have the segment_list option
<kepstin> but the examples are intended to be read along with the docs so you can understand what they're doing
<johnjaye> right. i guess skimming the docs hurt me a lot in this case
<MarioMey> kepstin: what would your technique look like? How do I cut the file into, for example, 8 chunks?
<MarioMey> The file is an mp4 encoded as h264.
<kepstin> I'd split the file into reasonable-length chunks (e.g. several minutes per chunk) with the segment muxer with -c copy, use a script to do a parallel loop over the individual segment files doing decode, filter, encode to new video files, then use the ffmpeg concat demuxer with -c copy to combine the results into one video file.
<kepstin> but i would only do that if the single-threaded filter is the bottleneck. if the video encode (which is already multithreaded) is the bottleneck, there's no real reason to split it up.
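kepstin's three-stage pipeline could be sketched like this (segment length and filenames are assumptions; the `enable='gte(t,15)'` clause is dropped here because timestamps restart in each chunk, so any time-based expression would need per-chunk adjustment):

```shell
# 1) split by stream copy; cuts land on keyframes, no quality loss
ffmpeg -i 123.mp4 -c copy -f segment -segment_time 300 chunk%03d.mp4

# 2) filter and encode the chunks in parallel
for f in chunk*.mp4; do
  ffmpeg -i "$f" -vf "removelogo=mask.png" -c:a copy "done_$f" &
done
wait

# 3) stream-copy concat back into one file
for f in done_chunk*.mp4; do echo "file '$f'"; done > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4
```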
<MarioMey> You mean not splitting the file into different files and applying the filter to those, but using -ss and -to to create the files?
<MarioMey> And then concat all the files...
<MarioMey> Because, if I understand correctly, you can't split a file wherever you want without re-encoding... or can you?
<MarioMey> (Sorry for my bad grammar!)
<MarioMey> Thanks, furq. I'm checking that.
<MarioMey> Can you give me an example command to split a file into 8 files using the segment muxer?
<snoriman> I'm building ffmpeg with support for x264 but it only works when I add "--enable-gpl --enable-libx264 --enable-encoder=libx264". The docs say it should be enough to only add "--enable-gpl --enable-x264" (using n7.1). Could this be a documentation issue, or rather something with my build?
<furq> MarioMey: dur=$(ffprobe -v error -show_entries format=duration -of csv=nk=1:p=0 foo.mp4); ffmpeg -i foo.mp4 -c copy -f segment -segment_time $((dur/8)) out%d.mkv
<furq> snoriman: do you have any --disable options set
<snoriman> yes
<furq> that's probably why then
<snoriman> ah ok, thanks
<furq> it's generally better to not touch --disable stuff until it's time to actually do a release build (if that's the intention)
<furq> ffmpeg with just x264 will build pretty fast anyway if that was the idea
<furq> certainly a lot faster than finding out your build is broken 13 times
<snoriman> Ah ok, good to know. I was disabling everything by default and only enabling what I need
<snoriman> haha
<snoriman> true
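For the record, an enable-only build along the lines snoriman describes might start from something like this. It is a build-config fragment, not a complete recipe: with `--disable-everything`, the exact set of muxers, demuxers, decoders, protocols and parsers to re-enable depends on what the program actually needs.

```shell
./configure --disable-everything \
    --enable-gpl --enable-libx264 --enable-encoder=libx264 \
    --enable-muxer=mp4 --enable-protocol=file
```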
<MarioMey> furq: I'm getting a bash error:
<MarioMey> bash: 181.006000: error sintáctico: operador aritmético inválido (el elemento de error es ".006000") [in English: syntax error: invalid arithmetic operator, the error token being ".006000"]
<MarioMey> My OS is in Spanish... maybe it's expecting a "," instead of a "."?
<MarioMey> It's weird...
<johnjaye> i would imagine numbers still have to be decimal point
<furq> yeah i forgot that doesn't even work with floats as inputs
<furq> i thought it would just do an integer division
<furq> $((${dur%.*} / 8))
<furq> or you can go to bc if you really need millisecond precision
<furq> but it seems like a waste of time here
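furq's fix in context, with the probe and split in one piece (same sketch as before; foo.mp4 is a stand-in name):

```shell
# ffprobe prints a float like 181.006000; bash's $(( )) only does integer
# arithmetic, so strip the fractional part with ${dur%.*} first.
dur=$(ffprobe -v error -show_entries format=duration -of csv=nk=1:p=0 foo.mp4)
seg=$(( ${dur%.*} / 8 ))   # e.g. 181.006000 -> 181 / 8 = 22
ffmpeg -i foo.mp4 -c copy -f segment -segment_time "$seg" out%d.mkv
```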
<MarioMey> Yes, it made 8 files (plus a little one, 9 in total). But there's a problem I ran into a couple of minutes before asking you for an example.
<MarioMey> Every file but the first seems to have no initial keyframe.
<MarioMey> All of them start with an almost entirely green screen, which then starts moving around, glitched.
<furq> i guess the input is open gop
<furq> that's pretty annoying
<MarioMey> Well, I checked and no... all the files are bad.
<MarioMey> There is no keyframe... and no correct frame at all.
<MarioMey> They have audio, but the video isn't there.
<MarioMey> As I said: almost all green, a gray box, and a little movement of both.
<MarioMey> Input is a mp4 file.
<MarioMey> h264.
<johnjaye> furq: bash and its arithmetic have been a disaster for humanity
<snoriman> Is there an example that shows how to encode a "raw" image buffer (rgb, yuv420p)? I've been looking at the `encode_video.c` example, but that allocates its own buffer. I already have a buffer that I want to feed into the encoder.
<BtbN> put the buffer into a frame if it has a valid layout for an existing pixel format
<snoriman> thanks, I just realized that I probably want to use a converter, to convert from rgb into nv12 or similar
<snoriman> and then I can directly feed the frame which the converter (swscale) gives me, right?
<BtbN> FFmpeg can absolutely work with an RGB frame
<BtbN> via swscale directly or libavfilter
<snoriman> yes, exactly
<snoriman> when I want to encode using x264, is nv12 the preferred pixel format?
<snoriman> or was it yuv420p
<BtbN> I don't think it takes anything but yuv420p?
<snoriman> libx264.c seems to list a few more (nv12, nv16, nv21, etc)
<snoriman> but yuv420p is the first entry in the list :)
<BtbN> I'd expect conversion to 420p to be a bit faster, since no interleaving
<snoriman> Ok
<snoriman> :) ofc, I still need to wrap my RGB buffer into an AVFrame before I can pass it to `sws_scale_frame()`. Is there any documentation/guide on how to prep frames for these kinds of operations?
<BtbN> don't use the scale_frame function
<kepstin> i think x264 accepts several 4:2:0 formats, but will internally convert them to its preferred working format
<snoriman> BtbN: ok thanks, what should I use instead? (but now I'm curious why I shouldn't use `scale_frame()`)
<snoriman> kepstin: ok I see
<BtbN> The function that takes buffers instead of a frame
<BtbN> the frame one is just a convenience function in case you already got a frame
<snoriman> ok, then I probably need `sws_scale()`; it would be nice (I think) if I could allocate an AVFrame for the required width/height/pixfmt and use that as the output of `sws_scale()`.
<snoriman> Can I use `av_frame_get_buffer()` for that?
<BtbN> to fill a new frame for sws to write into, yeah
<snoriman> ok nice! got it working
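A minimal C sketch of the approach snoriman ended up with, as described above (the helper name is hypothetical, and the input is assumed to be a tightly packed RGB24 buffer; error paths are shortened):

```c
#include <libavutil/frame.h>
#include <libavutil/pixfmt.h>
#include <libswscale/swscale.h>

/* Hypothetical helper: convert a packed RGB24 buffer into a freshly
 * allocated yuv420p AVFrame, ready to hand to an encoder. */
AVFrame *rgb24_to_yuv420p(const uint8_t *rgb, int width, int height)
{
    /* sws_scale() takes plain pointer/stride arrays, so the caller's
     * buffer needs no AVFrame wrapper on the input side. */
    const uint8_t *src_data[4]   = { rgb, NULL, NULL, NULL };
    const int      src_stride[4] = { 3 * width, 0, 0, 0 }; /* packed RGB24 */

    AVFrame *dst = av_frame_alloc();
    if (!dst)
        return NULL;
    dst->format = AV_PIX_FMT_YUV420P;
    dst->width  = width;
    dst->height = height;
    if (av_frame_get_buffer(dst, 0) < 0) {   /* 0 = default alignment */
        av_frame_free(&dst);
        return NULL;
    }

    struct SwsContext *sws = sws_getContext(width, height, AV_PIX_FMT_RGB24,
                                            width, height, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!sws) {
        av_frame_free(&dst);
        return NULL;
    }
    sws_scale(sws, src_data, src_stride, 0, height, dst->data, dst->linesize);
    sws_freeContext(sws);
    return dst;
}
```

Since width and height don't change here, swscale only does the pixel-format conversion, which matches the BtbN/snoriman exchange above.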
<johnjaye> i'm surprised 4:2:0 is so common. i would have guessed 4:2:2
<BtbN> 4:2:2 is a weird half-thing
<johnjaye> i guess yuv420p being the default should have clued me in
<furq> like a lot of video things it's because of inertia from the 1950s
<furq> but also because you'd rarely notice any difference
<BtbN> well, it still saves bandwidth today, and matches how the eye perceives color
<furq> yeah it's not obviously stupid in the way that ntsc framerates are
<another|> tbf, they were a clever hack back in the 60s
<another|> but not today anymore
<furq> and like all clever hacks they linger around until they become a disgusting stain
<another|> yep
<kepstin> 4:2:2 is mostly useful if you have to deal with interlaced video; dealing with vertically subsampled interlaced chroma is a pain :(
<kepstin> the fun one is 4:1:1 which is what you get when you want the same bandwidth as 4:2:0 but don't want vertically subsampled chroma - so you get full vertical resolution and 1/4 horizontal resolution :(
<kepstin> (with interlaced video being... an inertia thing from the 1930s that has vastly outlived its usefulness)
<another|> yep
<BtbN> German public broadcast is still 1080i50 to this day
<BtbN> or 720p50
<furq> it's all 1080i50 here too
<furq> or 576i50 for the filler channels
<johnjaye> interlaced broadcasting still exists?
<furq> it's still the most widely used across the world i think
<furq> it takes less bandwidth and every tv and/or stb has builtin deinterlacing
<furq> it only becomes a problem when the minimum wage intern or less than minimum wage subcontractor who uploads clips to youtube doesn't know what interlacing is
<BtbN> It's also still extremely common to see heavy combing artifacts in youtube videos showing TV content
<furq> and also for official vod services to have stuff that was obviously shot in 1080p25 (or 24) only available in frame doubled 1080p50
<furq> but then also somehow stuff that was shot in 1080i50 is like that sometimes
<furq> so they've single rate deinterlaced it and then frame doubled it
<furq> and there is no 1080p25 stream there
<furq> thinking in particular of one state broadcaster here