Video: Quantitative Evaluation and Attributes of Overall Brightness in an HDR World

HDR has long been heralded as a highly compelling and effective technology: high dynamic range can improve video of any resolution and mimics the natural world much better. HDR’s growth into real-world use remains relatively slow, but it continues to show progress.

HDR is so compelling because it can feed our senses more light, and it’s no secret that TV shops know we like nice, bright pictures on our TV sets. But the reality of producing HDR is that you have to contend with human eyes, which have a great ability to see dark and bright images – but not at the same time. The eye can simultaneously distinguish about 12 stops of brightness, only two thirds of its total, non-simultaneous range.
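The ‘stops’ in that figure are doublings of luminance, so the simultaneous and total ranges can be related with a simple log-base-2 calculation. A minimal sketch, with hypothetical luminance values chosen purely to make the arithmetic clean:

```python
import math

def stops(l_max: float, l_min: float) -> float:
    """Dynamic range in photographic stops; each stop doubles the luminance."""
    return math.log2(l_max / l_min)

# Hypothetical luminances: a 4096:1 contrast ratio is exactly 12 stops.
simultaneous = stops(4096.0, 1.0)
print(simultaneous)          # 12.0
# If 12 stops is two thirds of the eye's total range, the total is 18 stops.
print(simultaneous * 3 / 2)  # 18.0
```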
The fact that our eyes constantly adapt and, let’s face it, interpret what they see makes understanding brightness in video tricky. Perceived brightness depends on the overall brightness of the picture at any one moment, the brightness of recently seen pictures, the brightness of adjacent parts of the image, the ambient background and much more.

Stelios Ploumis steps into this world of varying brightness to create a way of quantitatively evaluating brightness for HDR. The starting point is the Average Picture Level (APL), which is what the SDR world uses to indicate brightness. With the greater dynamic range of HDR and the way it is implemented, it’s not clear that APL is up to the job.

Stelios explains his work analysing APL in SDR and HDR and shows how simply taking the average of a picture can trick you into rating two images as practically the same, when the brain clearly sees one as ‘brighter’ than the other. On the same track, he also explains ways in which we can differentiate signals better, for instance taking into account the spread of the brightness values as opposed to APL’s normalised average of all pixels’ values.
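This isn’t Stelios’ exact method, but the limitation of a bare average is easy to demonstrate: two signals with identical means can have very different spreads. A small sketch using invented pixel values:

```python
import statistics

# Two hypothetical 8-pixel "images" of brightness values, scaled 0.0-1.0.
flat_grey     = [0.5] * 8                                   # uniform mid-grey
high_contrast = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9]   # dark/bright checker

# An APL-style average is identical for both, so it cannot tell them apart.
print(statistics.mean(flat_grey), statistics.mean(high_contrast))      # 0.5 0.5

# A spread measure (population standard deviation here) separates them:
# 0.0 for the flat image, ~0.4 for the high-contrast one.
print(statistics.pstdev(flat_grey), statistics.pstdev(high_contrast))
```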

The talk wraps up with a description of how the testing was carried out and a summary of the proposals to improve the quantitative analysis of HDR video.

Watch now!
Speakers

Stelios Ploumis
PhD Research Candidate
MTT Innovation Inc.

Video: Microservices & Media: Are we there yet?

Microservices split large applications into many small, simple, autonomous sections. This can be a boon, but this simplicity hides complexity. Chris Lennon looks at both sides to find the true value in microservices.

By splitting a program or service into many small blocks, each block becomes simpler, so testing each block becomes simpler too. Updating one block hardly affects the system as a whole, leading to quicker and more agile development and deployment. Indeed, microservices have many success stories attributed to them. Less vocal are those who have had failures or increased operational problems due to their use.

Like any technology, there are ‘right’ and ‘wrong’ times and places to deploy it. Chris, from MediAnswers, explains where he sees the break-even line between deploying and not deploying microservices, giving his reasons, which include hidden complexity and your teams’ ability to deal with these many services, and covers some of the fallacies at play which tend to act against you.

A group has started up within SMPTE which wants to reduce the friction in implementing microservices, covering general interoperability and also interoperability across OSes. This should reduce the work needed to get microservices from different vendors working together as one.

Chris explains the work to date and the plans for the future for this working group.

Watch now!
Speakers

Chris Lennon
President & CEO,
MediAnswers

Video: Stopping Geolocation Fraud Via “Rented” Residential IPs to Protect Territorial Content

Securing streams is a cat-and-mouse game and this is the latest move to keep content secure.

In order to maximise returns on content, the right to show it is usually limited to certain geographies, and sometimes streaming rights are sold separately from broadcast rights. This means it’s common to geo-lock streaming services, whereby each IP address requesting content is checked against a database to see in which country that computer is located. This system isn’t perfect, but it tends to work fairly well.

The key, then, for people wanting to access content from outside the geography is to use someone else’s IP address. You can do this by renting time on a computer from AWS, Digital Ocean or other similar providers, where you can select in which country the computer you are using is located. However, the IP addresses owned by these providers are also in the databases and are often blocked from access.
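The blocking step described here amounts to a range lookup: is the requesting address inside a range known to belong to a hosting provider? A minimal sketch, using an invented blocklist built on a documentation-reserved range standing in for a real cloud provider’s addresses:

```python
import ipaddress

# Invented example data: ranges a geolocation database might flag as datacentre.
# 203.0.113.0/24 is a documentation-reserved range, used here as a stand-in.
DATACENTRE_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_datacentre_ip(addr: str) -> bool:
    """Return True if the address falls inside a known hosting-provider range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in DATACENTRE_RANGES)

print(is_datacentre_ip("203.0.113.50"))  # True  -> likely cloud/VPN exit, block
print(is_datacentre_ip("198.51.100.7"))  # False -> not in a known range, allow
```

Real geolocation databases hold millions of such ranges and would use an interval tree or similar rather than a linear scan, but the membership test is the same idea.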

The determined viewer therefore needs a VPN which uses residential IP addresses within that location. The OTT providers can’t block legitimate residential addresses, so access is granted. The VPNs gain access to these addresses by offering free use of their service in exchange for being able to route traffic via your computer.

Detecting this kind of use is difficult, and is what Artem Lalaiants discusses in this talk from SMPTE 2018.

Watch now!

Speaker

Artem Lalaiants
Service Delivery Manager,
GeoGuard

Video: TR-1001 Replacing Video By Spreadsheet

Here to kill the idea of SDNs – Spreadsheet Defined Networks – is TR-1001 which defines ways to implement IP-based media facilities avoiding some typical mistakes and easing the support burden.

From the JT-NM (Joint Taskforce – Networked Media), TR-1001 promises to be a very useful document for companies implementing ST-2110 or any video-over-IP network. Explaining what’s in it is EEG’s Bill McLaughlin at the VSF’s IP Showcase at NAB.

This isn’t the first time we’ve written about TR-1001 at The Broadcast Knowledge. Previously, Imagine’s John Mailhot has dived in deep as part of a SMPTE standards webcast. Here, Bill takes a lighter approach to get over the main aims of the document and adds details about recent testing which happened across several vendors.

Bill looks at the typical issues that people find when initially implementing a system with ST-2110 devices and summarises the ways in which TR-1001 mitigates these problems. The aim here is to enable, at least in theory, many nodes to be configured in an automatic and self-documenting way.

Bill explains that TR-1001 covers timing, discovery and connection of devices, plus some aspects of configuration and monitoring. As we would expect, ST-2110 itself defines the media transport and also some of the timing. Work is still to be done for TR-1001 to address security aspects.

Speaker

Bill McLaughlin
VP Product Development,
EEG Enterprises