Video: Content Production Technology on Hybrid Log-Gamma

‘Better Pixels’ is the continuing refrain from the large number of people who are dissatisfied with simply increasing resolution to 4K or even 8K. Why can’t we have a higher frame rate instead? Why not give us a wider colour gamut (WCG)? And why not give us a higher dynamic range (HDR)? Often, they would prefer any of these three options over higher resolution.

Watch this video to find out more.

Dynamic range is the term used to describe the difference between the smallest possible signal and the strongest possible signal. In audio, it’s the quietest thing that can be heard versus the loudest thing that can be heard (without distortion). In video, it’s the difference between black and white – after all, can your TV fully simulate the brightness and power of our sun? No. What about your car’s headlights? Probably not. Can your TV go as bright as your phone’s flashlight? Well, now that’s realistic.
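Dynamic range in imaging is often counted in ‘stops’, where each stop is a doubling of light. As a hedged sketch (the nit values below are illustrative figures, not from the talk), the ratio between a display’s brightest white and darkest black converts to stops like this:

```python
import math

def dynamic_range_stops(max_nits, min_nits):
    """One stop is a doubling of luminance, so the range in stops
    is the base-2 log of the contrast ratio."""
    return math.log2(max_nits / min_nits)

# Hypothetical example displays:
# an SDR TV reaching 100 nits with a 0.1 nit black level, versus
# an HDR TV reaching 1000 nits with a 0.005 nit black level.
sdr = dynamic_range_stops(100, 0.1)     # ~10 stops
hdr = dynamic_range_stops(1000, 0.005)  # ~17.6 stops
print(f"SDR: {sdr:.1f} stops, HDR: {hdr:.1f} stops")
```

The contrast ratios themselves (1,000:1 versus 200,000:1) make the same point, but stops match how camera and grading people talk about exposure.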

So let’s say your TV can go from a very dark black to being as bright as a medium-power flashlight – what about the video that you send it? When there’s a white frame, do you want your TV blasting as bright as it can? HDR allows producers to control the brightness of your display device so that something that is genuinely very bright – a star, a bright light, an explosion – can be represented very brightly, whereas something which is simply white can have the right colour but also a medium brightness. With Standard Dynamic Range (SDR) video, there isn’t this level of control.

For films, HDR is extremely useful, but for sports too – who hasn’t seen a football game where the sun leaves half the pitch in shadow and half in bright sunlight? With SDR, there’s no choice but to have one half either very dark or very bright (mostly white), so you can’t actually see the game there. HDR enables the production crew to let HDR TVs show detail in both areas of the pitch.

HLG, which stands for Hybrid Log-Gamma, is a way of delivering HDR video. It’s been pioneered, famously, by Japan’s NHK with the UK’s BBC and has been standardised as ARIB STD-B67. In this talk, NHK’s Yuji Nagata helps us navigate working with multiple formats: converting HDR HLG to SDR, plus converting from HLG to PQ, the HDR format pioneered by Dolby.
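The ‘hybrid’ in the name describes the transfer curve itself: dark scene light is encoded with a square-root (gamma-like) segment, as an SDR camera would, while bright light is encoded logarithmically. A minimal sketch of the opto-electrical transfer function as published in ARIB STD-B67 (normalised input, for illustration only):

```python
import math

# Constants from ARIB STD-B67 (the HLG OETF).
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e):
    """Map normalised scene-linear light e in [0, 1] to an HLG signal value.

    Below 1/12 the curve is a square root, like a conventional SDR
    gamma; above it the curve is logarithmic - hence 'hybrid log-gamma'.
    """
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return A * math.log(12.0 * e - B) + C

print(hlg_oetf(0.0))              # 0.0
print(hlg_oetf(1.0 / 12.0))       # 0.5 - where the two segments join
print(round(hlg_oetf(1.0), 3))    # ~1.0
```

This backwards compatibility with the SDR gamma curve in the lower half is what lets an HLG signal look reasonable on an SDR display, which is a large part of its appeal for broadcasters.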

The reality of broadcasting is that anyone who is producing a programme in HDR will have to create an SDR version at some point. The question is how to do that and when. For live, some broadcasters may need to fully automate this. In this talk, we look at a semi-automated way of doing this.

HDR is usually delivered in a wide colour gamut signal such as the ITU’s BT.2020. Converting between this colour space and the more common BT.709 colour space, which is part of the HD video standards, is also needed on top of the dynamic range conversion. So listen to Yuji Nagata’s talk to find out NHK’s approach to this.
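In linear light, the gamut conversion itself is a 3×3 matrix multiply; the hard part is deciding what to do with colours that land outside BT.709. A simplistic sketch using commonly published coefficients (the inverse of the BT.709-to-BT.2020 matrix in ITU-R BT.2087; the hard clip here stands in for the proper gamut mapping a real converter would use):

```python
# Approximate linear BT.2020 RGB -> linear BT.709 RGB matrix.
# Each row sums to ~1.0, so reference white maps to reference white.
M = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def bt2020_to_bt709(rgb):
    """Convert one linear-light BT.2020 pixel to BT.709.

    Saturated BT.2020 colours fall outside the BT.709 gamut (negative
    or >1 components), so we hard-clip to [0, 1] for illustration.
    Transfer-function (HLG/gamma) conversion is a separate step.
    """
    out = [sum(M[r][c] * rgb[c] for c in range(3)) for r in range(3)]
    return [min(1.0, max(0.0, v)) for v in out]

print(bt2020_to_bt709([1.0, 1.0, 1.0]))  # white stays ~white
print(bt2020_to_bt709([1.0, 0.0, 0.0]))  # pure BT.2020 red clips
```

The clipping line is exactly where NHK’s approach, described in the talk, earns its keep: naive clipping distorts hue and brightness on saturated colours.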

NHK has pushed very hard for many years to make 8K broadcasts feasible and has in recent times focussed on tooling up in time for the 2020 Olympics. This talk was given at the SMPTE 2017 technical conference, but is all the more relevant now as NHK increases the number of 8K broadcasts approaching the opening ceremony. This work on HDR and WCG is part of making sure that the 8K format really delivers an impressive and immersive experience for those lucky enough to experience it. The work on video goes hand in hand with NHK’s tireless work on audio, which can deliver 22.2 multichannel surround.

Watch now!


Yuji Nagata
Broadcast Engineer, NHK

Webinar: Next Generation Audio & DVB

Webinar Date: 18th March 2019
Time: 14:00 GMT / 15:00 CET

Object oriented audio is a relatively new technique which doesn’t simply send audio as one track or two; it sends individual audio objects – simplistically, we can think of these as audio samples – which also come with some position information.

With non-object-oriented audio, there is very little a speaker system can do to adjust the audio to match its layout. The audio was created for eight speakers, six, two and so on. So if you have a system that only has four speakers, or they are in unusual places, it’s a compromise to make it sound right.

Object oriented audio sends position information with some of the audio, which means the decoder can work out how much of the sound to put into each speaker to best represent that sound for whatever room and speaker set-up it has.
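As a toy illustration of that idea (my own hedged sketch, not how an AC-4 renderer actually works), a renderer can turn an object’s position into per-speaker gains. Here a single left/right position in [-1, 1] is mapped to two speaker gains with a constant-power pan law; real NGA renderers generalise this to full 3D layouts:

```python
import math

def pan_gains(position):
    """Derive (left, right) speaker gains for one audio object.

    position in [-1, 1]: -1 = hard left, 0 = centre, +1 = hard right.
    The sine/cosine pan law keeps total power (L^2 + R^2) constant,
    so the object doesn't get louder or quieter as it moves.
    """
    angle = (position + 1.0) * math.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    return math.cos(angle), math.sin(angle)

left, right = pan_gains(0.0)
print(round(left, 3), round(right, 3))  # equal gains at the centre
```

The key point the webinar makes is that this calculation happens in the decoder, so the same transmitted objects can be rendered sensibly to 22.2, 5.1, stereo or an irregular living-room layout.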

AC-4 from Dolby is one technology which allows objects to be sent with the audio. It still supports conventional 5.1 style sound but can also contain up to 7 audio objects. AC-4 is one NGA technology adopted by DVB for DASH.

In this webinar, Simon Tuff from the BBC discusses what the Audio Video Coding (AVC) experts of DVB have been working on over recent years to introduce Next Generation Audio (NGA) to the DVB specifications. With the latest version of TS 101 154, DVB’s guidelines for the use of video and audio coding in broadcast and broadband applications, being published by ETSI, it seems like a great time to unpack the audio part of the toolbox and share the capabilities of NGA via a webinar.

No registration needed. Click here to watch on the day.


Simon Tuff
Principal Technologist
BBC

Video: Visual Excellence in Production

In this Tech Talk we hear from researchers and vision scientists how they are ensuring the precision of HDR and colour in image capture.

Today’s imaging technology strives to produce a viewing experience which is, as far as possible, identical to that perceived by the human visual system. Strangely, one limiting factor in high dynamic range (HDR) design has been that existing measurements of human vision have not been sufficiently accurate. Another of these issues is skin tone: humans are particularly sensitive to skin colour, regarding it as an indicator of well-being, so the accurate portrayal of this subtle parameter is particularly important. A further interesting image quality issue is slow motion – here we explore the development of an 8K UHD 240fps camera and a slow-motion capture and replay server.

Watch now!


Lucien Lenzen
Research Assistant
Hochschule RheinMain
Simon Thompson
Project R&D Engineer
Patrick Morvan
Senior R&D Engineer
Simon Gauntlett
Director of Imaging Standards and Technology
Dolby Laboratories