Video: UHD – commercial success or work in progress?

Where is UHD? Whilst the move to HD for US primetime slots happened very quickly, HD had actually taken many years to gain a hold on the market. Now, though SD services are still numerous, top-tier channels all target HD and, in terms of production, SD doesn't really exist. Is UHD successfully building the momentum needed to dominate the market in the way that HD does, or are there blockers? Is there the will but not the bandwidth? Can we show that UHD makes financial sense for a business? This video from the DVB Project and the Ultra HD Forum answers these questions.

Ian Nock takes the mic first and explains the Ultra HD Forum's role in the industry ahead of introducing Dolby's Jason Power. Ian explains that the Ultra HD Forum is an open organisation focused on all aspects of Ultra High Definition including HDR, Wide Colour Gamut (WCG), Next Generation Audio (NGA) and High Frame Rate (HFR). Jason Power is the chair of the DVB Commercial Module AVC Working Group. He starts by outlining the UHD-1 Phase 1 and Phase 2 specifications: Phase 1 defines the higher resolution and colour gamut, while Phase 2 adds higher frame rates, better audio and HDR. DVB works to produce standards that define how these can be used, and the majority of UHD services available are DVB compliant.

On the topic of available services, Ben Schwarz takes the stand next to introduce the Ultra HD Forum's 'Service Tracker', which tracks the UHD services available to the public around the world. Ben underlines that the number of services tripled between 2018 and 2020. The tracker lets you order by country, filter by resolution (from 2K to 8K) and more. Ben gives a demo and explains the future plans.

Paul Gray focuses on the global television set business. He starts by looking at how the US and Europe have caught up with China in terms of shipments, while the trend of buying a TV set – on average – an inch larger than the year before shows little sign of abating. A positive for the industry, in light of Covid-19, is that the market is not predicted to shrink; rather, the growth that was expected will be stunted. The US replaces TVs more often than other countries, so the share of TVs there which are UHD is higher than anywhere else. Europe still has a large proportion of people who are happy with 32″ TVs, and at that size HD is perfectly OK for them. Paul shows a great graph plotting the UHD penetration of each market against the number of UHD services available. We see that Europe is notably in the lead and that China has barely any UHD services at all, though it should be noted that Omdia are counting linear services only.

Graph showing UHD Penetration per geographical market Vs. Number of Linear UHD services in that Market

Graph and Information ©Omdia

The next part of the video is a 40-minute Q&A which includes Virginie Drugeon, who explains her work in defining the dynamic metadata that is sent to the receiver so that it can correctly adapt the picture, particularly for HDR, to the display itself. The Q&A covers the impacts of Covid-19, recording formats for delivery to broadcasters, bitrates on satellite, the Ultra HD Forum's foundational guidelines, new codecs within DVB, high frame rate content and many other topics.

Watch now!
Download the presentations
Speakers

Jason Power
Chair of the DVB Commercial Module AVC Working Group
Commercial Partnerships and Standards, Dolby Laboratories
Ben Schwarz
Chair of Ultra HD Forum Communication Working Group
Paul Gray
Research Director,
Omdia
Virginie Drugeon
Senior Engineer, Digital Standardisation,
Panasonic
Moderator: Ian Nock
Chair of the Interoperability Working Group of the Ultra HD Forum
Principal Consultant & Founder, Fairmile West

On Demand Webinar: The Technology of Motion-Image Acquisition

A lot of emphasis is put on the tech specs of cameras, but this misses a lot of what makes motion-image acquisition an art form as much as it is a science. To understand the physics of lenses, it's vital we also understand the psychology of perception. And to understand what '4K' really means, we need to understand how the camera records the light and how it stores the data. Getting a grip on these core concepts allows us to navigate a world of mixed messages where every camera manufacturer, from webcam to phone, from DSLR to cinema, is vying for our attention.

In the first of four webinars produced in conjunction with SMPTE, Russell Trafford-Jones from The Broadcast Knowledge welcomes SMPTE Fellows Mark Schubin and Larry Thorpe to explain these fundamentals, providing a great intro for those new to the topic and filling in some blanks for those who have heard it before!

Russell will start by introducing the topic and exploring what makes some cameras suitable for some types of shooting, say, live television, and others for cinema. He'll talk about the place of smartphones and DSLRs in our video-everywhere culture. Then he'll examine the workflows needed for different genres, which drive the definitions of these cameras and lenses. If your live TV show is going to be seen 2 seconds later by 3 million viewers, that will determine many features of your camera that digital cinema doesn't have to deal with, and vice versa.

Mark Schubin will be talking about lighting, optical filtering, sensor sizes and lens mounts. Mark spends some time explaining how light is created and what it is made up of: the 'white' that we see may be made of thousands of wavelengths of light, or just a few. So the type of light can be important for lighting a scene, and knowing about it important for deciding on your equipment. The sensors that then receive this light are also well worth understanding. It's well known that there are red-, green- and blue-sensitive pixels, but less well known is that there is a microlens in front of each one. Granted, the lens we think most about is pricey, but it is one among several million. Mark explains why these microlenses are there and the benefits they bring.

Larry Thorpe, from Canon, will take on the topic of lenses, starting from the basics of what we're trying to achieve with a lens and working up to explaining why we need so many pieces of glass to make one. He'll examine the important aspects of a lens which determine its speed and focal length. Prime and zoom are important types of lens to understand, as each represents a compromise. Furthermore, we see that zoom lenses take careful design to ensure that focus is maintained throughout the zoom range, also known as tracking.

Larry will also examine the outputs of the cameras, the most obvious being the SDI out of the CCU of broadcast cameras and the raw output from cinema cameras. For film use, maintaining quality is usually paramount so, where possible, nothing is discarded, hence the creation of 'raw' files, so named because they record, as closely as practical, the actual sensor data received. The broadcast equivalent is predominantly Y'CbCr with 4:2:2 colour subsampling, meaning the sensor data has been interpreted and processed into video pixels and half of the colour information has been discarded. This still looks great for many uses, but when you want to put your image through a meticulous post-production process, you need the complete picture.
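To make the 4:2:2 idea concrete, here is a minimal sketch (ours, not from the webinar) of horizontal chroma subsampling: the luma plane keeps every sample while each chroma plane keeps only every other sample per row.

```python
# Illustrative sketch of 4:2:2 chroma subsampling: full-resolution
# luma (Y'), chroma (Cb, Cr) halved horizontally. Planes are plain
# lists of rows for clarity, not a production pixel format.

def subsample_422(y, cb, cr):
    """Return 4:2:2 planes: Y' untouched, Cb/Cr keep every other column."""
    half = lambda plane: [row[::2] for row in plane]
    return y, half(cb), half(cr)

# A tiny 2x4 frame: luma keeps all 4 samples per row, chroma keeps 2.
y  = [[16, 32, 48, 64], [16, 32, 48, 64]]
cb = [[128, 130, 132, 134], [128, 130, 132, 134]]
cr = [[120, 122, 124, 126], [120, 122, 124, 126]]

y2, cb2, cr2 = subsample_422(y, cb, cr)
print(len(y2[0]), len(cb2[0]))  # 4 2
```

Averaging neighbouring chroma samples before decimating (rather than simply dropping them, as here) is what real encoders do to avoid aliasing; the data reduction is the same.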

The SMPTE Core Concepts series of webcasts is free to all and aims to help individuals deepen their knowledge. This webinar is in collaboration with The Broadcast Knowledge which, by talking about a new video or webinar every day, helps empower each person in the industry by offering a single place to find educational material.

Watch now!
Speakers

Mark Schubin
Engineer and Explainer
Larry Thorpe
Senior Fellow,
Canon U.S.A., Inc.
Russell Trafford-Jones
Editor, The Broadcast Knowledge
Manager, Services & Support, Techex
Exec Member, IET Media

Video: Extension to 4K resolution of a Parametric Model for Perceptual Video Quality

Measuring video quality automatically is invaluable and, for many uses, essential. But as video evolves with higher frame rates, HDR, a wider colour gamut (WCG) and higher resolutions, we need to make sure the automatic evaluations evolve too. Called 'objective metrics', these computer-based assessments go by names such as PSNR, DMOS and VMAF. One use for these metrics is to automatically analyse an encoded video to determine whether it looks good enough or should be re-encoded, allowing the bitrate to be optimised for quality. Rafael Sotelo, from the Universidad de Montevideo, explains how his university helped work on an update to predicted MOS to do just this.
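As a point of reference for what an objective metric computes, here is a sketch of PSNR, the simplest metric mentioned above (our illustration, not from the talk): the ratio of the peak signal value to the mean squared error between reference and encoded pixels, in decibels.

```python
import math

def psnr(reference, distorted, peak=255):
    """Peak signal-to-noise ratio between two equal-length lists of
    8-bit pixel values: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical signals: no error at all
    return 10 * math.log10(peak ** 2 / mse)

# Four reference pixels vs. their lightly distorted encoded versions.
ref = [50, 100, 150, 200]
enc = [52, 98, 149, 203]
print(round(psnr(ref, enc), 1))  # 41.6
```

PSNR is cheap but correlates only loosely with what viewers report, which is exactly why perceptually tuned, MOS-predicting metrics like the one Rafael describes are needed.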

MOS is the Mean Opinion Score, a result derived from a group of people watching content in a controlled environment. They vote to say how they feel about the content and the votes, when combined, give an indication of the quality of the video. The trick is to enable a computer to predict what people will say. Rafael explains how this is done, looking at some of the maths behind the predicted score.

In order to test any 'upgrades' to the objective metric, you need to test it against people's actual scores; this is called 'subjective testing'. So Rafael explains how he set up his viewing environments in both Uruguay and Italy to be compliant with BT.500, a standard which specifies how a room should be set up to give viewing conditions that maximise the viewers' ability to appreciate the pros and cons of the content. For instance, it specifies how dim the room should be, how reflective the screens can be and how they should be calibrated. The guidelines don't cover HDR, 4K and the like, so the team devised an extension to the standard in order to carry out the testing.

With all of this work done, Rafael shows us the benefits of using this extended metric and the results achieved.

Watch now!
Speakers

Rafael Sotelo
Director, ICT Department
Universidad de Montevideo

Webinar: HDR Dynamic Mapping

HDR broadcast is on the rise, as we saw from the increased number of ways to watch this week's Super Bowl in HDR, but SDR will be with us for a long time. Not only will services have to move seamlessly between SDR and HDR, but there is also a technique that allows HDR itself to be dynamically adjusted to better match the display it's on.

Introduced in July 2019, Dynamic Mapping (DM) lets content be more accurately represented on any specific display, particularly lower-end TVs. It applies to PQ10, the 10-bit version of Dolby's Perceptual Quantizer HDR format standardised as SMPTE ST 2084. Because HLG (ARIB STD-B67) works differently, it doesn't need dynamic mapping. The dynamic metadata to support this function is defined in SMPTE ST 2094-10 and -40, and also as part of ETSI TS 103 433-2.
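For readers curious what the PQ curve actually looks like, here is a sketch of the ST 2084 EOTF (our illustration; the constants are those published in the standard), mapping a normalised signal value in [0, 1] to an absolute luminance in cd/m²:

```python
# SMPTE ST 2084 (PQ) EOTF sketch. Constants as published in the standard.
M1 = 2610 / 16384        # 0.1593017578125
M2 = 2523 / 4096 * 128   # 78.84375
C1 = 3424 / 4096         # 0.8359375
C2 = 2413 / 4096 * 32    # 18.8515625
C3 = 2392 / 4096 * 32    # 18.6875

def pq_eotf(e, peak=10000.0):
    """Luminance in cd/m² for a normalised PQ signal value e in [0, 1]."""
    p = e ** (1 / M2)
    return peak * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

print(pq_eotf(0.0))  # 0.0     (black)
print(pq_eotf(1.0))  # 10000.0 (peak white)
```

Because PQ codes absolute luminance up to 10,000 nits while real panels peak far lower, a display must tone-map the signal down; dynamic metadata describes each scene so that mapping can adapt, which is the job DM performs.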

Stitching all of this together and helping us navigate delivering the best HDR are Dolby's Jason Power and Virginie Drugeon from Panasonic in this webinar organised by DVB.

Register now!
Speakers

Virginie Drugeon
Senior Engineer for Digital TV Standardisation, Panasonic
Chair, DVB TM-AVC Group
Jason Power
Senior Director, Commercial Partnerships and Standards, Dolby Laboratories
Chair, DVB CM-AVC Group