Hi wesk05
This is how we treat HEVC/HDR.
1) Demux into packets by ffmpeg. This can go via two paths, one for xxx.mov, the other for everything else. I bring this up because the two paths are treated slightly differently inside. So far, I've not seen any difference. The other potential item is that ffmpeg packages the sps, pps and vps into extradata, which we feed into the decoder. If ffmpeg is not fully packaging the sps, pps or vps for hdr and has missed propagating some fields, then this might be the point of error.
2) Decode by AVFoundation. We use something called CDVDVideoCodecAVFoundation. Kodi has no such decoder, but Memphiz's YAB implementation does, as he backported it from us. CDVDVideoCodecAVFoundation uses AVSampleBufferDisplayLayer for the decode; it is the other way to do hw decode on iOS/tvOS. CDVDVideoCodecAVFoundation is a BYPASS codec and works like MediaCodec(Surface) on Android. We make the GLES layer transparent and add a layer under it for video. AVSampleBufferDisplayLayer takes care of all the details regarding this layer, and iOS/tvOS will blend everything correctly for display. Basically, we set up using the sps, pps and vps passed up by the demuxer in extradata. Then it gets opened and fed demux packets, and the video magically appears on the layer, bypassing the normal GLES renderer. Generally, the ffmpeg demuxer will pass up all demuxed packets, but it is possible that it eats something it thinks is bogus. This is another potential place for error.
For the SDR/HDR switch, we track some details that indicate SDR/HDR/Dolby Vision. When playback starts, we use the Apple APIs to switch to the desired frame rate and range. This is where things get slightly interesting. SDR accepts range values of 0 or 1, HDR 2 or 3, and Dolby Vision 4. Experimentally determined. But using a test app, loading test files into an asset, then checking the range reported by the asset, SDR gives 1, HDR gives 3 and Dolby Vision gives 4. So inquiring minds ask: what are 0 and 2? These values are not documented yet.
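For reference, the mapping above boils down to something like this. Keep in mind these integers are experimentally determined on tvOS and undocumented by Apple, so this is an assumption, not an API contract:

```cpp
#include <string>

// Sketch of the experimentally determined (undocumented) range values:
// 0/1 observed for SDR, 2/3 for HDR, 4 for Dolby Vision. Assets report
// 1, 3 and 4 respectively; what 0 and 2 mean is unknown.
std::string DynamicRangeName(int range)
{
  switch (range)
  {
    case 0:
    case 1:
      return "SDR";
    case 2:
    case 3:
      return "HDR";
    case 4:
      return "DolbyVision";
    default:
      return "Unknown";
  }
}
```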
That was the long answer. The short answer to 'HDR10 metadata' is: if ffmpeg is handling it properly, we just pass everything to AVSampleBufferDisplayLayer and the rest is out of our hands.
What I have to dig into is where in the demuxer the display luminance values are kept. I suspect sps/pps/vps. Once I find where, I can check ffmpeg to make sure it is populating them from parsed values and not using some default.
EDIT: I don't have the equipment that you have to see exactly what display luminance values are going out to the display. Is there some test file I can run where I can see the error? Like some gray ramp that should show A but shows B? For example, that Sony test file: as I don't know what it should look like, it's rather useless to me since it looks fine.