HDR has long been heralded as a compelling and effective technology: high dynamic range can improve video of any resolution and far better mimics the natural world. Its growth into real-world use remains relatively slow, but progress continues.
HDR is so compelling because it can feed our senses more light, and it's no secret that TV shops know we like nice, bright pictures on our sets. But the reality of producing HDR is that you have to contend with human eyes, which have a great ability to see both dark and bright images – just not at the same time. The eye's ability to simultaneously distinguish brightness spans about 12 stops, only two thirds of its total, non-simultaneous range.
The fact that our eyes constantly adapt and, let's face it, interpret what they see makes understanding brightness in video tricky. There are dependencies on the overall brightness of the picture at any one moment, its recent brightness history, the brightness of adjacent parts of the image, the ambient background and much more.
Stelios Ploumis steps into this world of varying brightness to create a way of quantitatively evaluating brightness for HDR. The starting point is the Average Picture Level (APL), which is what the SDR world uses to indicate brightness. With HDR's greater dynamic range and the way it is implemented, it's not clear that APL is up to the job.
Stelios explains his work analysing APL in SDR and HDR, showing how simply taking the average of a picture can trick you into seeing two images as practically the same, when the brain clearly sees one as 'brighter' than the other. On the same track, he also explains ways we can better differentiate such signals, for instance by taking into account the spread of the brightness values rather than APL's normalised average of all pixel values.
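The average-versus-spread distinction can be sketched in a few lines. This is a minimal, hypothetical illustration (not the metric Stelios actually proposes): two made-up sets of normalised luminance samples share the same mean, so an APL-style average cannot separate them, while a simple spread measure such as the standard deviation can.

```python
from statistics import mean, pstdev

# Two illustrative 1-D "images" of normalised luminance values (0.0-1.0).
# These samples are invented for the sketch: one is uniform mid-grey,
# the other mixes deep shadows with bright highlights.
flat_image = [0.5] * 8
contrasty_image = [0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9]

def apl(pixels):
    """APL-style measure: the normalised mean of all pixel values."""
    return mean(pixels)

def spread(pixels):
    """Population standard deviation, one simple measure of brightness spread."""
    return pstdev(pixels)

print(apl(flat_image), apl(contrasty_image))        # identical means: APL can't tell them apart
print(spread(flat_image), spread(contrasty_image))  # very different spreads: this measure can
```

Both images report an APL of 0.5, yet their spreads differ by 0.4, which is exactly the kind of gap an average-only metric hides.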
The talk wraps up with a description of how the testing was carried out and a summary of the proposals to improve the quantitative analysis of HDR video.
Microservices split large applications into many small, simple, autonomous sections. This can be a boon, but this simplicity hides complexity. Chris Lennon looks at both sides to find the true value in microservices.
By splitting a program or service into many small blocks, each block becomes simpler, so testing each one becomes simpler too. Updating one block hardly affects the system as a whole, leading to quicker, more agile development and deployment. Indeed, microservices have many success stories attributed to them. Less vocal are those who have suffered failures or increased operational problems through their use.
Like any technology, there are 'right' and 'wrong' times and places to deploy it. Chris, from MediAnswers, explains where he sees the break-even line between deploying microservices and not, and gives his reasons, which include hidden complexity and your team's ability to deal with so many services, and covers some of the fallacies at play which tend to act against you.
A group has started up within SMPTE that wants to reduce the friction in implementing microservices, including general interoperability and interoperability across operating systems. This should reduce the work needed to get microservices from different vendors working together as one.
Chris explains the work to date and the plans for the future for this working group.
Securing streams is a cat-and-mouse game and this is the latest move to keep content secure.
In order to maximise returns on content, the right to show it is usually limited to certain geographies, and streaming rights are sometimes sold separately from broadcast rights. This makes it common to geo-lock streaming services, whereby each IP address requesting content is checked against a database to see in which country that computer is located. The system isn't perfect, but it tends to work fairly well.
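A greatly simplified sketch of that lookup is below. Everything here is hypothetical: real services use large commercial IP-geolocation databases rather than a hard-coded table, and the addresses (drawn from the reserved documentation ranges) and licensed-country choice are purely illustrative.

```python
# Hypothetical, hard-coded stand-in for a commercial IP-geolocation database.
IP_COUNTRY_DB = {
    "203.0.113.7": "GB",   # illustrative residential address
    "198.51.100.9": "US",  # illustrative cloud-provider address
}
# Known datacentre/cloud ranges are often blocked outright.
CLOUD_PROVIDER_IPS = {"198.51.100.9"}
LICENSED_COUNTRIES = {"GB"}

def allow_stream(ip: str) -> bool:
    """Allow playback only for IPs that geolocate to a licensed country
    and are not known datacentre/cloud addresses."""
    if ip in CLOUD_PROVIDER_IPS:
        return False
    return IP_COUNTRY_DB.get(ip) in LICENSED_COUNTRIES

print(allow_stream("203.0.113.7"))   # residential IP in a licensed country: allowed
print(allow_stream("198.51.100.9"))  # cloud-provider IP: blocked
```

Note that the cloud-provider check runs first, which mirrors the point below: even an address that geolocates correctly is refused if it belongs to a known hosting provider.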
The key, then, for people wanting to access content from outside the licensed geography is to use someone else's IP address. You can do this by renting time on a computer from AWS, Digital Ocean or similar providers, where you can choose the country in which your machine is located. However, the IP addresses owned by these providers are also in the databases and are often blocked from access.
The determined viewer, therefore, needs a VPN that uses residential addresses within the target location. The OTT providers can't block legitimate residential IP addresses, so access is granted. VPN operators obtain these addresses by offering their service for free in exchange for being able to route traffic via your computer.
Detecting this kind of use is difficult, and is what Artem Lalaiants discusses in this talk from SMPTE 2018.
“I’m lazy and I’m a master procrastinator.” If you sympathise, learn how to automate network configuration with some code and spreadsheets.
In this video, the EBU’s Ievgen Kostiukevych presents a simple way to automate basic operations on Arista switches working in a SMPTE ST 2110 environment. This is done with a Python script which retrieves parameters stored in Google Sheets and uses Arista’s eAPI to implement changes to the switch.
The Python script was created as a proof of concept for the EBU's test lab, where frequent changes of VLAN configuration on the switches were required. Google Sheets was selected as a collaborative tool that allows multiple people to modify settings and keep track of changes at the same time. This approach also makes repetitive tasks, like adding or changing port descriptions, easier.
Functionality currently supported:
Creating VLANs and modifying their descriptions based on the data in a Google Sheet
Changing access VLANs and interface descriptions for ports based on the data in a Google Sheet
Reading interface status and the MAC address table from the switch and writing the data back to the spreadsheet
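The EBU script itself isn't reproduced here, but the shape of the first two tasks can be sketched: turn rows fetched from a spreadsheet into Arista CLI command lines, then push them over eAPI. The column names below ("vlan", "name", "port", "description") are assumptions, not the actual schema of Ievgen's sheet, and the gspread/pyeapi calls shown in comments are one plausible way to wire it up rather than the script's confirmed implementation.

```python
# Hypothetical sketch: build Arista CLI commands from spreadsheet rows.
# Column names are assumptions, not the EBU script's actual schema.

def vlan_commands(rows):
    """Build 'vlan <id>' / 'name <desc>' config lines from sheet rows."""
    cmds = []
    for row in rows:
        cmds.append(f"vlan {row['vlan']}")
        cmds.append(f"name {row['name']}")
    return cmds

def port_commands(rows):
    """Build per-interface access-VLAN and description lines from sheet rows."""
    cmds = []
    for row in rows:
        cmds.append(f"interface {row['port']}")
        cmds.append(f"switchport access vlan {row['vlan']}")
        cmds.append(f"description {row['description']}")
    return cmds

# In the real workflow the rows would come from Google Sheets, e.g.:
#   rows = gspread.service_account().open("lab").sheet1.get_all_records()
# and the commands would be applied over Arista's eAPI, e.g. with pyeapi:
#   pyeapi.connect_to("switch1").config(vlan_commands(rows))
sheet_rows = [{"vlan": 110, "name": "PTP"}, {"vlan": 120, "name": "video"}]
print(vlan_commands(sheet_rows))
```

Keeping the command-building step as a pure function like this makes it easy to test without a switch attached; only the final `config()` call needs live eAPI access.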