michaelni changed the topic of #ffmpeg-devel to: Welcome to the FFmpeg development channel | Questions about using FFmpeg or developing with libav* libs should be asked in #ffmpeg | This channel is publicly logged | FFmpeg 8.0 has been released! | Please read ffmpeg.org/developer.html#Code-of-conduct
odrling has quit [Remote host closed the connection]
<haasn>
Lynne: it turns out this problem is incredibly annoying and I'm not sure that anything makes sense
<haasn>
as always when it comes to color stuff
<haasn>
it seems like we should be linearly interpolating between the planckian locus and the daylight locus, based on whether the kelvin value falls within the standard D-series illuminant range or not
<haasn>
because what does it even mean to transpose a scene from 6500K to 1700K
<haasn>
you are trying to go from a scene lit in daylight to one lit by a candle
<haasn>
or other incandescent source, which would be following a blackbody spectrum not a daylight spectrum
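[As a concrete reading of the idea above, a minimal C sketch: compute a chromaticity for a given kelvin value from both the Planckian (blackbody) locus and the CIE daylight locus, then blend between them. The 4000K-5500K transition range is an arbitrary choice made here for illustration, and the polynomial coefficients are the commonly published approximations (the CIE daylight formula and the Kim et al. cubic fit for the blackbody locus), so verify them before relying on this.]

    #include <math.h>

    struct xy { double x, y; };

    /* CIE daylight locus, valid for roughly 4000K-25000K */
    static struct xy daylight_xy(double t)
    {
        double x;
        if (t <= 7000.0)
            x = 0.244063 + 0.09911e3 / t + 2.9678e6 / (t * t) - 4.6070e9 / (t * t * t);
        else
            x = 0.237040 + 0.24748e3 / t + 1.9018e6 / (t * t) - 2.0064e9 / (t * t * t);
        return (struct xy){ x, -3.000 * x * x + 2.870 * x - 0.275 };
    }

    /* Planckian (blackbody) locus, cubic approximation, ~1667K-25000K;
     * coefficients reproduced from the published fit, double-check them */
    static struct xy planckian_xy(double t)
    {
        double x, y;
        if (t <= 4000.0)
            x = -0.2661239e9 / (t * t * t) - 0.2343589e6 / (t * t)
              + 0.8776956e3 / t + 0.179910;
        else
            x = -3.0258469e9 / (t * t * t) + 2.1070379e6 / (t * t)
              + 0.2226347e3 / t + 0.240390;
        if (t <= 2222.0)
            y = -1.1063814 * x*x*x - 1.34811020 * x*x + 2.18555832 * x - 0.20219683;
        else if (t <= 4000.0)
            y = -0.9549476 * x*x*x - 1.37418593 * x*x + 2.09137015 * x - 0.16748867;
        else
            y =  3.0817580 * x*x*x - 5.87338670 * x*x + 3.75112997 * x - 0.37001483;
        return (struct xy){ x, y };
    }

    /* Blend: pure blackbody below 4000K, pure daylight above 5500K
     * (transition range chosen arbitrarily), linear mix in between. */
    static struct xy white_point_for_kelvin(double t)
    {
        double w = fmin(fmax((t - 4000.0) / (5500.0 - 4000.0), 0.0), 1.0);
        struct xy p = planckian_xy(t), d = daylight_xy(t);
        return (struct xy){ p.x + w * (d.x - p.x), p.y + w * (d.y - p.y) };
    }

[With that blend, 1700K lands entirely on the blackbody side and 6500K entirely on the daylight side, which matches the candle-vs-daylight distinction above.]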
<Lynne>
wbs: awesome
<Lynne>
haasn: when in doubt, copy what ACES does
<haasn>
Lynne: do you know what they do?
fjlogger has quit [Remote host closed the connection]
<haasn>
they actually default to the same curve as the one we use in libplacebo
<Lynne>
I could try to find a candle and shoot some footage, but I think it'll be fine to adjust it later
<Lynne>
could you also consider stealing the vf_scale_vulkan debayer code and adding support for it in libplacebo?
<haasn>
what do you even need it for?
<Lynne>
I want to be able to quickly view camera footage with mpv, and possibly use libplacebo to convert it into linear XYZ ready for editing
<haasn>
I meant the 1800K color temperature adaptation
<Lynne>
in case one day I'd like to shoot under candle light
<Lynne>
or a late sunset
englishm has quit [Ping timeout: 265 seconds]
englishm has joined #ffmpeg-devel
psykose has quit [Remote host closed the connection]
Teukka has quit [Quit: Not to know is bad; not to wish to know is worse. -- African Proverb]
Teukka has joined #ffmpeg-devel
Teukka has quit [Changing host]
Teukka has joined #ffmpeg-devel
<kepstin>
in most cases people use in-camera colour balance controls for that, but I suppose it might make sense to leave a camera on fixed D65 mode and do correction in post in some cases...
<Lynne>
raw has no colorspace or white balance
<kepstin>
you must have _some_ indication of the native white balance of the camera when used in raw mode, somehow...
<kepstin>
otherwise raw processing wouldn't be able to correctly adapt it at all
<Lynne>
the camera really doesn't do much to the signal - it samples each RGGB pixel, converts the 15-bit signal into some log gamma, crams that into 12-bit prores raw, and writes it out
<Lynne>
you're supposed to know what the white balance is - whether it's some object in the scene that's white, or a color table sheet, or a clapper board
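[For illustration, a rough sketch of that encode step under stated assumptions: a made-up log curve standing in for the camera's actual V-Log transfer, and none of the real ProRes RAW packing - it only shows the shape of the step, 15-bit linear sensor code -> normalize -> log-encode -> requantize to 12 bits.]

    #include <math.h>
    #include <stdint.h>

    static uint16_t log_encode_12bit(uint16_t linear15)
    {
        double x = linear15 / 32767.0;              /* normalize the 15-bit input */
        double y = log2(1.0 + 1023.0 * x) / 10.0;   /* toy log curve, [0,1] -> [0,1] */
        return (uint16_t)lrint(y * 4095.0);         /* requantize to 12 bits */
    }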
<kepstin>
at the very least, most cameras usually know which raw values correspond to "white" at various colour temperatures for when they're capturing in non-raw mode.
<Lynne>
white balance detection is disabled when recording raw on mine
<Lynne>
but yeah, on non-raw, the data's in a big XML sheet just dumped in the metadata
<Lynne>
along with aperture, iso, focal length and focus
<kepstin>
going from raw where you have a sample value which you want to say is "neutral" to, say, D65 is very different from "I filmed this using D65 mode on my camera but it was actually candlelight and I want to correct the color balance"
<Lynne>
yup
<kepstin>
for the latter case, gimp provides a ui with two sliders; you set the first one to the color temperature that the camera was set to, and the second to the color temperature of the lights that were being used.
<kepstin>
alternatively, to get an artistic effect on an image that's already white balanced, you can set the second slider to the output color temp and the first slider to the color temp you want to emulate.
<kepstin>
regarding the math for actually doing the conversion, are you using the Bradford color adaptation transform or something similar?
<kepstin>
if you know how to go from raw values to CIE 1931 XYZ, then the same math can be used to convert from a sampled white balance checker in raw to an arbitrary different white point too.
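[A minimal sketch of that adaptation, assuming the standard Bradford cone matrix and white points supplied as XYZ with Y = 1: transform to the Bradford cone space, scale by the ratio of the destination and source whites, and transform back.]

    /* Bradford cone response matrix (XYZ -> "LMS") */
    static const double BRAD[3][3] = {
        { 0.8951,  0.2664, -0.1614},
        {-0.7502,  1.7135,  0.0367},
        { 0.0389, -0.0685,  1.0296},
    };

    static void mat_mul_vec(const double m[3][3], const double v[3], double out[3])
    {
        for (int i = 0; i < 3; i++)
            out[i] = m[i][0]*v[0] + m[i][1]*v[1] + m[i][2]*v[2];
    }

    /* invert a 3x3 matrix via the adjugate */
    static void mat_inv(const double m[3][3], double out[3][3])
    {
        double det =
            m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1]) -
            m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0]) +
            m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]);
        out[0][0] =  (m[1][1]*m[2][2] - m[1][2]*m[2][1]) / det;
        out[0][1] = -(m[0][1]*m[2][2] - m[0][2]*m[2][1]) / det;
        out[0][2] =  (m[0][1]*m[1][2] - m[0][2]*m[1][1]) / det;
        out[1][0] = -(m[1][0]*m[2][2] - m[1][2]*m[2][0]) / det;
        out[1][1] =  (m[0][0]*m[2][2] - m[0][2]*m[2][0]) / det;
        out[1][2] = -(m[0][0]*m[1][2] - m[0][2]*m[1][0]) / det;
        out[2][0] =  (m[1][0]*m[2][1] - m[1][1]*m[2][0]) / det;
        out[2][1] = -(m[0][0]*m[2][1] - m[0][1]*m[2][0]) / det;
        out[2][2] =  (m[0][0]*m[1][1] - m[0][1]*m[1][0]) / det;
    }

    /* Adapt an XYZ colour from src_white to dst_white (both XYZ, Y = 1). */
    static void bradford_adapt(const double xyz[3],
                               const double src_white[3],
                               const double dst_white[3],
                               double out[3])
    {
        double lms[3], lms_src[3], lms_dst[3], inv[3][3], tmp[3];

        mat_mul_vec(BRAD, xyz, lms);
        mat_mul_vec(BRAD, src_white, lms_src);
        mat_mul_vec(BRAD, dst_white, lms_dst);

        /* von Kries-style scaling in the Bradford cone space */
        for (int i = 0; i < 3; i++)
            tmp[i] = lms[i] * lms_dst[i] / lms_src[i];

        mat_inv(BRAD, inv);
        mat_mul_vec(inv, tmp, out);
    }

[For the two-slider workflow described above, the two slider temperatures map onto the source and destination white points fed into a transform like this, each converted to a chromaticity/XYZ first, e.g. with something like white_point_for_kelvin from the earlier sketch.]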
<Lynne>
what about the former case?
<kepstin>
if you don't know what color the sampled value in the raw actually is as absolute XYZ, then you can't do a perceptual color adaptation transform. best you could do is linear scaling, i think :/
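[That fallback, spelled out: per-channel gains derived from a patch that should be neutral, applied directly to the camera RGB values, with no cone-space transform since the raw-to-XYZ matrix is unknown here.]

    #include <stddef.h>

    /* Scale an interleaved RGB buffer so the sampled "should be neutral"
     * patch becomes grey; gains are normalized to green, the usual
     * convention for raw white balance. */
    static void wb_linear_scale(float *rgb, size_t npixels,
                                float neutral_r, float neutral_g, float neutral_b)
    {
        const float gain_r = neutral_g / neutral_r;
        const float gain_b = neutral_g / neutral_b;
        for (size_t i = 0; i < npixels; i++) {
            rgb[3 * i + 0] *= gain_r;
            rgb[3 * i + 2] *= gain_b;
        }
    }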
<kepstin>
the camera probably does know this information from characterization of the sensor stored in its firmware :/
<kepstin>
as for ui, i'm mostly used to still photography raw, and on cameras i've used, darktable provides a single selector to let you indicate the colour temp of the lighting. For this to work, it has to know the native white balance of the sensor from... somewhere. (it also supports profiling the camera using multiple colours from a color card)
<Lynne>
oh, the manufacturer does provide .cube files
<Lynne>
though I'm pretty sure you don't need to use them - basically, there's a cube file to convert external RAW footage (e.g. via a RAW-over-HDMI recorder) into what the camera natively records as raw (prores raw, e.g. V-log), and a cube LUT to go from RAW to BT709
<Lynne>
which implies that the camera already converts everything into a common V-log representation, which can be graded as if another V-log camera recorded it
<Lynne>
and for some reason it can't do that via RAW-over-HDMI, presumably due to bandwidth limitations, so you need a LUT to convert that recorded stream into something that can be graded as if it was natively recorded
<Lynne>
though their cube file to convert v-log into bt709 looks wrong, the image is way too green, so I'm wondering what's going on
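[For reference, applying such a .cube LUT comes down to a 3D table lookup with trilinear interpolation. The sketch below assumes the LUT has already been parsed into a flat float array of size*size*size RGB triples with the red index varying fastest (the .cube convention); in practice FFmpeg's lut3d filter does this directly from the file, e.g. something like -vf lut3d=vlog_to_bt709.cube with a hypothetical file name.]

    #include <math.h>

    /* Look up `in` (components in [0,1]) in a parsed 3D LUT using
     * trilinear interpolation over the 8 surrounding lattice points. */
    static void lut3d_apply(const float *lut, int size,
                            const float in[3], float out[3])
    {
        int   lo[3], hi[3];
        float frac[3];

        for (int c = 0; c < 3; c++) {
            float p = in[c] * (size - 1);
            lo[c]   = (int)floorf(p);
            if (lo[c] < 0)        lo[c] = 0;
            if (lo[c] > size - 2) lo[c] = size - 2;
            hi[c]   = lo[c] + 1;
            frac[c] = p - lo[c];
        }

        for (int c = 0; c < 3; c++) {
            float acc = 0.0f;
            for (int k = 0; k < 8; k++) {
                int r = (k & 1) ? hi[0] : lo[0];
                int g = (k & 2) ? hi[1] : lo[1];
                int b = (k & 4) ? hi[2] : lo[2];
                float w = ((k & 1) ? frac[0] : 1.0f - frac[0]) *
                          ((k & 2) ? frac[1] : 1.0f - frac[1]) *
                          ((k & 4) ? frac[2] : 1.0f - frac[2]);
                acc += w * lut[3 * ((b * size + g) * size + r) + c];
            }
            out[c] = acc;
        }
    }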