Video: UHD and HDR at the BBC – Where Are We Now, and Where Are We Going?

Has UHD been slow to roll out? Not so, we hear in this talk, which explains the work done to date by the BBC and associated organisations such as the EBU in standardising, testing and broadcasting UHD.

Simon Thompson from BBC R&D points out that HD took decades to translate from an IBC demo to an on-air service, whereas UHD channels surfaced only two years after the first IBC demonstration of UHD video. UHD has had a number of updates since the initial, resolution-focused definition, which created UHD-1 (2160 lines high) and UHD-2 (often called 8K). Later, HDR with Wide Colour Gamut (WCG) was added, allowing the image to much better replicate the brightnesses the eye is used to and almost all naturally-occurring colours; it turns out that HD TV (using Rec. 709 colour) cannot reproduce many colours commonly seen at football matches.

In fact, the design brief for HDR UHD was specifically to keep images looking natural, which allows better control over the artistic effect. In terms of HDR, the aim was to cover a greater range than the human eye can see in any one adaptation state. The human eye can see an incredible range of brightnesses, but it does this by adapting to different brightness levels, for instance by changing the pupil size. In a fixed state, the eye can only access a subset of that sensitivity without further adapting. The aim of HDR is that, with the eye held in one adaptation state by the ambient brightness, the TV can show any brightness the eye can perceive in that state.

Simon explains the two HDR formats: Dolby’s PQ, widely adopted by the film industry, and Hybrid Log-Gamma (HLG), usually favoured by broadcasters who show live programming. PQ, we hear from Simon, covers the whole range of the human visual system, meaning that any PQ stream has the capability to describe images from 1 to 10,000 nits. To make this work properly, the display needs to know the average brightness level of the video, which isn’t available until the end of the recording. PQ therefore requires sending metadata, and the result depends on the ambient light level in the room.
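To make PQ’s absolute nature concrete, here is a minimal Python sketch of the ST 2084 EOTF, which maps a normalised 0–1 signal value to absolute luminance. The constants are the published values from the standard, but treat this as an illustration rather than production code:

```python
# A minimal sketch of the PQ (SMPTE ST 2084) EOTF: it maps a normalised
# 0..1 signal value to an absolute luminance in nits. Constants are the
# published m1, m2, c1, c2, c3 values from the standard.
def pq_eotf(signal: float) -> float:
    m1 = 2610 / 16384        # 0.1593017578125
    m2 = 2523 / 4096 * 128   # 78.84375
    c1 = 3424 / 4096         # 0.8359375
    c2 = 2413 / 4096 * 32    # 18.8515625
    c3 = 2392 / 4096 * 32    # 18.6875
    e = signal ** (1 / m2)
    y = (max(e - c1, 0.0) / (c2 - c3 * e)) ** (1 / m1)
    return 10000.0 * y       # absolute luminance in nits

print(pq_eotf(1.0))  # 10000.0 -- the top of the PQ range
print(pq_eotf(0.5))  # ~92 nits -- half the signal range sits far below half the light
```

Note how perceptually the curve is allocated: half the signal range covers only the first ~92 nits, leaving plenty of code values for the bright highlights HDR is designed to carry.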

Hybrid Log-Gamma, by contrast, works on the fly. It doesn’t attempt to cover the whole range of the human eye, and no metadata is needed, which lends itself well to delivering HDR for live productions. To learn more about the details of PQ and HLG, check out this video.
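HLG’s scene-referred curve makes the “no metadata” point easy to see. The OETF below (constants from ITU-R BT.2100, sketched here for illustration) depends only on the camera signal, not on any description of the display or the programme’s brightness:

```python
import math

# A minimal sketch of the HLG OETF from ITU-R BT.2100: scene-linear light
# (normalised 0..1) in, non-linear signal out. Below the knee it is a
# square root, like a conventional gamma; above it, logarithmic.
def hlg_oetf(e: float) -> float:
    a = 0.17883277
    b = 1 - 4 * a                  # 0.28466892
    c = 0.5 - a * math.log(4 * a)  # 0.55991073
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return a * math.log(12 * e - b) + c

print(hlg_oetf(1 / 12))  # 0.5 -- where the two curve segments meet
print(hlg_oetf(1.0))     # 1.0 -- no display or programme metadata involved
```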

Simon outlines the extensive testing and productions done in UHD and looks at the workflows possible. The trick has been finding the best way to produce an SDR and an HDR version of a production at the same time. In the latest approach Simon highlights, all 70 cameras were racked in HDR by operators watching the SDR down-mix. The aim is to ensure that the SDR version looks perfect, as it still serves over 90% of the viewership. The longer-term goal, however, is a 100% HDR production with the SDR version derived from it without any active monitoring. The video ends with a look at the challenges yet to be overcome in UHD and HDR production.

Watch now!
Speaker

Simon Thompson
Senior R&D Engineer
BBC R&D

Video: Multicast ABR opens the door to a new DVB era

Multicast ABR (mABR) is a way of delivering standard HTTP-based streams like HLS and DASH over multicast. This can be done over an ISP’s managed network, multicasting to thousands of homes; only within the home itself is the stream converted back into unicast HTTP. This allows devices in the home to access streaming services in exactly the same way as they would Netflix or iPlayer, while avoiding strain on the core network. Streaming is normally a point-to-point service, so each device takes its own stream: if you have 3 devices in the home watching a service, 3 separate streams are sent out to them. With mABR, the core network only ever sees one stream per channel, and the fan-out to individual devices happens locally.
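As a back-of-envelope illustration (with hypothetical numbers, not figures from the webinar), the difference in core-network load looks like this:

```python
# Back-of-envelope comparison with hypothetical numbers: core-network
# load for unicast ABR versus mABR when every home watches the same
# 5 Mbit/s channel.
homes = 10_000
devices_per_home = 3
bitrate_mbps = 5

unicast_load = homes * devices_per_home * bitrate_mbps  # one stream per device
mabr_load = bitrate_mbps                                # one multicast stream, fanned out in the home

print(f"Unicast core load: {unicast_load / 1000} Gbit/s")  # 150.0 Gbit/s
print(f"mABR core load:    {mabr_load} Mbit/s")            # 5 Mbit/s
```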

Guillaume Bichot from Broadpeak explains how this works: a multicast server picks up the streaming files from a CDN or the internet and converts them into multicast. A gateway at the other end then converts the stream back into unicast. The gateway can run on a set-top box in the home, as long as multicast can be carried over the last mile to the box. Alternatively, it can sit upstream at a local headend or similar.

At the beginning of the talk, we hear from BBC R&D’s Richard Bradbury, who explains the current state of the work. Published as DVB Bluebook A176, the specification is currently written to cover live streaming, but will be extended in future to deal with video on demand. If the gateway becomes overloaded, it can respond with a standard HTTP redirect, which seamlessly pushes the player’s request directly to the relevant CDN endpoint.
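As a sketch of that overload behaviour, a gateway might answer segment requests something like this; the CDN URL and the load check are hypothetical stand-ins:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CDN_BASE = "https://cdn.example.com"  # hypothetical CDN origin

class GatewayHandler(BaseHTTPRequestHandler):
    """Sketch only: serve from the multicast cache when possible,
    otherwise redirect the player straight to the CDN."""

    def do_GET(self):
        if self.overloaded():
            self.send_response(307)  # Temporary Redirect -- a standard HTTP response
            self.send_header("Location", CDN_BASE + self.path)
            self.end_headers()
        else:
            self.send_response(200)  # in reality: return the cached segment bytes
            self.end_headers()

    def overloaded(self) -> bool:
        return True  # placeholder for a real load check

HTTPServer(("", 8080), GatewayHandler).serve_forever()
```

Because the redirect is plain HTTP, the player needs no special logic: it simply follows the Location header to the CDN, exactly as it would for any other stream.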

DVB also outlines how players can contact the CDN for missing data or for video streams that are not currently provided via the gateway. Guillaume outlines which parts of the ecosystem are specified and which are not: for instance, the function of the multicast server is defined, but not how it achieves it. He then shows where all this fits into the network stack and highlights that the design is protocol-agnostic as far as media delivery is concerned. Whilst DVB-DASH is the assumed target, this could as easily work with HLS or other formats.

Guillaume finishes by showing deployment examples. We see that this can work with uni-directional satellite feeds with a return channel over the internet. It can also work with multiple gateways accessible to a single consumer.

The webinar ends with questions, though Richard Bradbury was also answering questions in the chat throughout. DVB has provided a transcript of these questions.

Watch now!
Download the slides from this presentation
Speakers

Richard Bradbury
Lead Research Engineer,
BBC R&D
Guillaume Bichot
Principal Engineer, Head of Exploration,
Broadpeak

Video: Tech Talk: Production case studies – the gain after the pain

Technology has always been harnessed to improve, change and reinvent production. Automated cameras, LED walls, AR and LED lighting, among many other technologies, have all enabled productions to be done differently, creating new styles and even new types of programming.

In this Tech Talk from IBC 2019, we look at disruptive new technologies that change production, explained by the people who are implementing them and pushing the methods forward.

TV2 Norway’s Kjell Ove Skarsbø explains how they have developed a complete IP production flow and playout facility, giving them more flexibility and scalability. They did this by creating their own ESB (Enterprise Service Bus) to decouple equipment from direct point-to-point integrations and, working in an agile fashion, delivered incremental improvements. This means that Provys, Mayam, Viz and Mediator, amongst other systems, communicate with each other by delivering messages in a standard format to a framework which passes them on on their behalf.
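TV2’s ESB itself isn’t public, but a toy publish/subscribe bus shows the decoupling idea: each system only talks to the bus, never directly to another system:

```python
from collections import defaultdict

# Toy message bus illustrating the decoupling an ESB gives (purely
# illustrative; TV2's actual ESB is not public). Systems publish messages
# in an agreed format to the bus and never call each other directly.
class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self.subscribers[topic]:
            handler(message)

bus = Bus()
# Playout reacts to schedule changes without knowing which system sent them.
bus.subscribe("schedule.updated", lambda msg: print("playout saw:", msg))
# The traffic system publishes in the agreed standard message format.
bus.publish("schedule.updated", {"event_id": "abc123", "start": "20:00:00Z"})
```

Swapping out any one system then only means teaching the newcomer the message format, not rebuilding a web of direct integrations.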

Importantly, Kjell shares with us some mistakes that were made along the way: for instance, the difficulties caused by the sheer size of the project, and the importance of programmers understanding broadcast. “Make no compromise” is one of the lessons learnt which he discusses.

Olie Baumann from MediaKind presents live 360º video delivery. “Experiences that people have in VR embed themselves more like memories than experiences like television,” he explains. Olie starts by explaining the lay of the land in today’s VR equipment landscape, then looks at some of the applications of 360º video, such as looking around from an on-car camera in racing.

Olie talks us through a case study where he worked with Tiledmedia to deliver an 8K viewport, with full resolution delivered only in the direction the 360º viewer is looking and a lower resolution for the rest. When you move your head, the full-resolution area moves to match. We then look through the system diagram to understand which parts run in the cloud and what happens where.
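Tiledmedia’s actual scheme isn’t detailed in the talk, but a simple sketch with a hypothetical 8-column tile grid shows the viewport-dependent idea: pick full-resolution tiles around the viewer’s gaze and fall back to low resolution everywhere else:

```python
# Hypothetical 8-column tile grid over the 360-degree panorama; the real
# Tiledmedia layout is not described in the talk.
TILES_ACROSS = 8

def tiles_for_viewport(yaw_deg: float, fov_deg: float = 90.0):
    """Return the tile columns to fetch at full resolution for the
    viewer's current gaze direction; all other tiles come from the
    low-resolution fallback stream."""
    tile_width = 360 / TILES_ACROSS
    first = int((yaw_deg - fov_deg / 2) // tile_width)
    last = int((yaw_deg + fov_deg / 2) // tile_width)
    return sorted({i % TILES_ACROSS for i in range(first, last + 1)})

print(tiles_for_viewport(0.0))    # [0, 1, 7] -- looking straight ahead
print(tiles_for_viewport(180.0))  # [3, 4, 5] -- after turning around
```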

Matthew Brooks and Thomas Preece from BBC R&D explain their work taking object-based media from the research environment into mainstream production. Object-based media lets the receiving device display the objects in the best way for its display: in today’s world of second screens, screen sizes vary, and small screens can benefit from larger, or simply less, text. It also allows for interactivity, where programmes fork and can adapt to the viewer’s tastes, opinions and/or choices. Finally, they have delivered a tool to help productions manage this themselves, and productions can even make a linear version of the programme to maximise the value gained from the time and effort spent creating these unique productions.
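As a toy illustration of the object-based idea (not the BBC’s actual tooling), a receiver might be handed a list of objects plus simple rules and decide the presentation itself:

```python
# Toy illustration of object-based media (not the BBC's actual tooling):
# the receiver gets objects plus simple rules and decides presentation.
objects = [
    {"type": "video", "id": "main"},
    {"type": "subtitle", "id": "caption-1", "text": "Welcome back"},
]

def render(objects, screen_width_px: int):
    for obj in objects:
        if obj["type"] == "subtitle":
            size = 48 if screen_width_px < 800 else 32  # bigger text on small screens
            print(f"draw '{obj['text']}' at {size}px")
        else:
            print(f"play {obj['id']} full-screen")

render(objects, screen_width_px=480)  # a phone-sized second screen
```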

Watch now!
Speakers

Kjell Ove Skarsbø
Chief Technology Architect,
TV2 Norway
Olie Baumann
Senior Technical Specialist,
MediaKind
Matthew Brooks
Lead Engineer,
BBC Research & Development
Thomas Preece
Research Engineer,
BBC Research & Development

Video: AMWA BCP 003 NMOS API Security

Building security into your infrastructure is more and more important for broadcasters, with many now taking very seriously a topic which, only 6 years ago, was only just being discussed. Attacks on broadcasters like TV5 Monde have brought into focus that it’s not just companies holding high-value rights that are ripe for a breach; attacking a broadcaster is a high-impact way of getting your message across.

We have seen how the internet, which was built on very open and trusting protocols, has struggled in recent times to keep abuse to a minimum and to implement the security needed to keep data safe and unauthorised persons out.

And so AMWA is looking at its recent specifications to ensure there is a clear and interoperable way of implementing security. One benefit of IP should be that, as an industry, we can build on the work of the industries that came before us, and here, because these specifications are based on HTTP interfaces, we can do exactly that. Just as sites on the internet can implement HTTPS, we too can use the same mechanism of security certificates and TLS (colloquially known as SSL) encryption to ensure not only that our data is encrypted but also that no one can impersonate anyone else on the network.
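In practice this looks just like securing any other web API. Here is a minimal Python sketch, with a hypothetical host name and certificate path, of a client calling an NMOS IS-04 Node API endpoint over TLS:

```python
import requests  # third-party HTTP client: pip install requests

# The host name and CA bundle path below are placeholders for a real
# deployment; /x-nmos/node/v1.2/self is the IS-04 Node API's own resource.
resp = requests.get(
    "https://node.example.broadcaster.net/x-nmos/node/v1.2/self",
    verify="/etc/ssl/certs/broadcaster-ca.pem",  # check the server's certificate
    timeout=5,
)
resp.raise_for_status()
print(resp.json()["id"])  # traffic was encrypted and the node's identity verified
```

The same certificate checks work in both directions: a node presenting no valid certificate simply can’t join the conversation, which is what stops impersonation on the network.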

Simon Rankine from BBC R&D explains the work he has been part of in defining this secure interface which not only protects from mal-intentioned actors, but also offers some protection from accidental mistakes by staff.

Simon gives a good introduction not only to how this is a benefit, but also to how the underlying mechanisms work, which are just as applicable to the NMOS APIs as they are to general websites.

Watch now!
Speaker

Simon Rankine
Project Research Engineer,
BBC R&D