Video: Using AMWA IS-06 for Flow Control on Professional Media Networks

In IP networks, multicast flow subscription is usually based on a combination of the IGMP (Internet Group Management Protocol) and PIM (Protocol Independent Multicast) protocols. While PIM allows for very efficient delivery of IP multicast data, it doesn’t provide bandwidth control or device authorisation.
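As a reminder of what that subscription looks like at an endpoint, here is a small Python sketch that joins a multicast group, which is what triggers the IGMP membership report towards the first-hop router; the group address and port are arbitrary examples, not taken from any specification.

```python
# Joining a multicast group from an end device: the OS sends an IGMP membership
# report, and PIM-enabled routers then build the distribution tree towards the
# source. Group address and port below are arbitrary examples.
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # example ST 2110-style multicast group/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface (any here).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)  # first packet from the subscribed flow
print(f"received {len(data)} bytes from {addr}")
```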

To solve these issues on SMPTE ST 2110 professional media networks, the NMOS IS-06 specification has been developed. It relies on Software-Defined Networking (SDN), where the traffic management application embedded in each individual switch or router is replaced by a centralised Network Controller. This controller manages and monitors the whole network environment, making it bandwidth-aware.

The NMOS IS-06 specification provides a vendor-agnostic northbound interface from the Network Controller to the Broadcast Controller. Used in conjunction with IS-04 (Discovery and Registration) and IS-05 (NMOS Device Connection Management), IS-06 allows the Broadcast Controller to automatically set up media flows between endpoints on the network, reserve bandwidth for flows and enforce network security. The Broadcast Controller can also request network topology information from the Network Controller, which can be used to create a user-friendly graphical representation of the flows in the network.
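As a rough illustration of how a Broadcast Controller might exercise such a northbound interface, the hedged Python sketch below queries the network topology and requests a bandwidth-reserved flow over REST. The base URL, endpoint paths and JSON fields are assumptions made for illustration only; they are not quoted from the IS-06 specification.

```python
# Hypothetical sketch of a Broadcast Controller talking to a Network Controller
# over an IS-06-style northbound REST interface. Endpoint paths and payload
# fields are illustrative assumptions, not the normative IS-06 API.
import requests

NETWORK_CONTROLLER = "https://network-controller.example.com/x-nmos/netctrl/v1.0"

def get_topology():
    """Fetch the network topology so the Broadcast Controller can draw the flows."""
    resp = requests.get(f"{NETWORK_CONTROLLER}/topology", timeout=5)
    resp.raise_for_status()
    return resp.json()

def request_flow(sender_ep, receiver_ep, multicast_group, bandwidth_bps):
    """Ask the Network Controller to set up a bandwidth-reserved multicast flow."""
    payload = {
        "sender_endpoint": sender_ep,      # e.g. switch port / IP of the ST 2110 sender
        "receiver_endpoint": receiver_ep,  # e.g. switch port / IP of the receiver
        "multicast_address": multicast_group,
        "bandwidth": bandwidth_bps,        # reserved bandwidth in bits per second
    }
    resp = requests.post(f"{NETWORK_CONTROLLER}/flows", json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()  # would include a flow ID and provisioning status

if __name__ == "__main__":
    print(len(get_topology().get("links", [])), "links in topology")
    flow = request_flow("10.10.1.5", "10.10.2.9", "232.40.50.1", 1_500_000_000)
    print("Flow provisioned:", flow)
```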

In this presentation Rob Porter from Sony Europe explains the basics of NMOS IS-06, showing in detail how setting up media flows with this specification fits into the IS-04 / IS-05 workflow. Rob emphasises that all AMWA NMOS specifications are completely open and available to anyone, allowing for interoperability between broadcast and network devices from different manufacturers.

The next speaker, Sachin Vishwarupe from Cisco Systems, focuses on future work on IS-06, including provisioning feedback (such as insufficient bandwidth, no route available from sender to receiver, or no management connectivity), flow statistics, security and grouping (similar to a “salvo” in the SDI world).

There is also a discussion on extending the IS-06 specification to cover Network Address Translation (NAT), which would help resolve problems caused by address conflicts, e.g. when sharing resources between facilities.

You can find the slides here.

Watch now!

Speakers

Rob Porter
Project Manager – Advanced Technology Team
Sony Europe
Sachin Vishwarupe
Principal Engineer
Cisco Systems

Video: Three Roads to Jerusalem

With his usual entertaining vigour, Will Law explains the differences between the three approaches to low-latency streaming: DASH, LHLS and Apple’s LL-HLS. Likening them partly to religions that all get you to the same destination, we see how they differ and some of the reasons for that.

Please note: Since this video was recorded, Apple has released a new draft of LL-HLS. As described in this great article from Mux, the update’s changes are:

  • “Delivering shorter sub-segments of the video stream (Apple call these parts) more frequently (every 0.3 – 0.5s)
  • Using HTTP/2 PUSH to deliver these smaller parts, pushed in response to a blocking playlist request
  • Blocking playlist requests, eliminating the current speculative manifest request polling behaviour in HLS
  • Smaller, delta rendition playlists, which reduces playlist size, which is important since playlists are requested more frequently
  • Faster rendition switching, enabled by rendition reports, which allows clients to see what is happening in another playlist without requesting it in its entirety”[0]

Read the full article for the details and implications, some of which address some points made in the talk.
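To make the blocking-playlist and delta-playlist ideas in the list above more concrete, here is a minimal Python sketch of a client issuing an LL-HLS blocking playlist request. The _HLS_msn, _HLS_part and _HLS_skip query parameters come from Apple’s LL-HLS draft; the playlist URL and the surrounding logic are illustrative assumptions, not a production player.

```python
# Minimal sketch of an LL-HLS blocking playlist request (illustrative only).
# _HLS_msn / _HLS_part / _HLS_skip are the directives from Apple's LL-HLS draft;
# the server holds ("blocks") the response until the requested part is published.
import requests

PLAYLIST_URL = "https://example.com/live/video_1080p.m3u8"  # hypothetical rendition

def fetch_playlist_update(next_msn: int, next_part: int) -> str:
    params = {
        "_HLS_msn": next_msn,    # media sequence number the client is waiting for
        "_HLS_part": next_part,  # partial segment within that media sequence
        "_HLS_skip": "YES",      # ask for a smaller delta playlist
    }
    # The request blocks server-side until the part exists, replacing the
    # speculative polling used by classic HLS clients.
    resp = requests.get(PLAYLIST_URL, params=params, timeout=10)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(fetch_playlist_update(next_msn=1234, next_part=2)[:200])
```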

Furthermore, THEOplayer have released this talk explaining the changes and discussing implementation.

Anyone who saw last year’s Chunky Monkey video will recognise Will’s near-Oscar-winning animation style as he sets the scene, explaining the contenders to the low-latency streaming crown.

We then look at a bullet list of features across each of the three low-latency technologies (note Apple’s recent update), which leads on to a discussion of chunked transfer delivery and the challenges of line-rate delivery. A simple view of the universe would say that the ideal way to deliver a live stream encoded at a constant bitrate would be to stream it constantly at that bitrate to the receiver. Whilst this is, indeed, the best way to go, when we stream we’re also keeping one eye on whether we need to change the bitrate. If more bandwidth becomes available, it might be best to upgrade to a better quality; if we suddenly have contended, slow WiFi, it might be time for an emergency drop down to the lowest-bitrate stream.
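As a toy illustration of that switching decision, the sketch below picks a rendition from a bitrate ladder given an estimated throughput; the ladder values and the safety margin are invented assumptions, not anything prescribed by DASH or HLS.

```python
# Toy ABR decision: choose the highest rendition that fits within a safety
# margin of the estimated throughput. Ladder and margin are illustrative.
LADDER_BPS = [400_000, 1_200_000, 2_500_000, 5_000_000, 8_000_000]  # hypothetical ladder
SAFETY_MARGIN = 0.8  # only spend 80% of the estimated bandwidth

def choose_rendition(estimated_bps: float) -> int:
    budget = estimated_bps * SAFETY_MARGIN
    candidates = [b for b in LADDER_BPS if b <= budget]
    # Fall back to the lowest rendition when the network is badly contended.
    return max(candidates) if candidates else LADDER_BPS[0]

print(choose_rendition(3_000_000))  # -> 2_500_000 with the values above
```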

When a stream is delivered as individual files, you can measure how long they take to download to estimate your available bandwidth. If a file can be downloaded at 1Gbps, then it should always arrive at 1Gbps; therefore, if it arrives more slowly, we know there is a bandwidth restriction and can make adjustments. Will explains that for streams delivered with chunked transfer, or in real time as in LL-HLS, this estimation no longer works because the files are simply never available at 1Gbps. He then explains some of the work that has been undertaken to develop more nuanced ways of estimating available bandwidth. It’s well worth noting that the smaller the files you transfer, the less accurate the bandwidth estimation becomes, as TCP takes time to ramp up to line rate, so small 320ms video segments are not ideal for maximising throughput.
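A naive version of the segment-timing estimate Will describes might look like the following sketch; the comments also note why the same arithmetic misleads when a chunked-transfer part is drip-fed at the encode rate rather than at line rate. The numbers are invented for illustration.

```python
import time
import requests

def estimate_throughput_bps(url: str) -> float:
    """Naive ABR estimate: segment size divided by the time taken to download it.

    Works reasonably for complete segments delivered at line rate, but for
    chunked-transfer / LL-HLS parts the server only sends data as fast as it is
    encoded, so the measured value collapses towards the encode bitrate and no
    longer reflects the bandwidth actually available.
    """
    start = time.monotonic()
    resp = requests.get(url, timeout=10)
    elapsed = time.monotonic() - start
    return len(resp.content) * 8 / elapsed

# e.g. a 2 MB segment fetched in 0.2 s suggests ~80 Mbps of headroom, whereas a
# part drip-fed over its own ~0.33 s duration suggests little more than the encode rate.
```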

Continuing to look at the differences, we next look at request rates with DASH at 20 requests per second compared to LL-HLS at 720. This leads naturally to an analysis of the benefits of HTTP/2 PUSH technology used in LL-HLS and the savings that can offer. Will explores the implications, and some of the problems, with last year’s version of the LL-HLS spec, some of which have been mitigated since.

The talk concludes with some work Akamai has done to try and establish a single, common workflow with examples and a GitHub repository. Will shows how this works and the limitations of the approach and finishes with a look at the commonalities in approaches.

[0] From “Low Latency HLS 2: Judgment Day” https://mux.com/blog/low-latency-hls-part-2/

Watch now!
Speakers

Will Law
Chief Architect,
Akamai

Webinar: HDR Dynamic Mapping

HDR broadcast is on the rise, as we saw from the increased number of ways to watch this week’s Super Bowl in HDR, but SDR will be with us for a long time. Not only will services have to move seamlessly between SDR and HDR, but there is also a technique that allows HDR itself to be dynamically adjusted to better match the display it’s shown on.

Introduced in July 2019, Dynamic Mapping (DM) allows content to be more accurately represented on any specific display, particularly lower-end TVs. It applies to PQ-10, the 10-bit version of Dolby’s Perceptual Quantizer HDR format standardised as SMPTE ST 2084. Because PQ is an absolute, display-referred system, content mastered on a brighter display needs remapping on a less capable one; HLG (ARIB STD-B67) works differently, being scene-referred, so it doesn’t need dynamic mapping. The dynamic metadata to support this function is defined in SMPTE ST 2094-10 and ST 2094-40, and also as part of ETSI TS 103 433-2.
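For context on why PQ content may need remapping on a lower-end display, the sketch below evaluates the SMPTE ST 2084 PQ EOTF, converting a normalised signal value to absolute luminance in nits. The constants are those published in ST 2084; the example signal levels are arbitrary.

```python
# SMPTE ST 2084 (PQ) EOTF: maps a normalised signal value (0..1) to absolute
# luminance in cd/m^2 (nits), up to a 10,000-nit peak. Because the mapping is
# absolute, a display that can't reach the mastered peak has to tone-map
# (statically or dynamically) rather than simply scale the signal.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_eotf(signal: float) -> float:
    """Normalised PQ signal (0..1) -> luminance in nits."""
    n = signal ** (1 / M2)
    return 10000.0 * (max(n - C1, 0.0) / (C2 - C3 * n)) ** (1 / M1)

# Roughly 92, 984 and 10,000 nits for these example signal levels:
for signal in (0.5, 0.75, 1.0):
    print(signal, round(pq_eotf(signal), 1), "nits")
```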

Stitching all of this together and helping us navigate delivering the best HDR are Dolby’s Jason Power and Panasonic’s Virginie Drugeon in this webinar organised by DVB.

Register now!
Speakers

Virginie Drugeon
Senior Engineer for Digital TV Standardisation, Panasonic
Chair, DVB TM-AVC Group
Jason Power
Senior Director, Commercial Partnerships and Standards, Dolby Laboratories
Chair, DVB CM-AVC Group

Webinar: An Overview of the ATSC 3.0 Interactive Environment

Allowing viewers to interact with television services is an obvious next step for the IP-delivered ATSC service. Taking cues from the European HbbTV standard, the aim is to give viewers as many practical ways as possible to direct their viewing, opening up new avenues for television channels and programme creators.

Mark Corl is chair of TG3/S38, the Specialist Group on Interactive Environment, whose aim is to support interactive applications and their companion devices. It has produced the A/344 standard, which is based on W3C technologies with APIs that support the needs of broadcast television. It describes the Interactive Environment Content Display model, allowing video to be mixed with app graphics as a composite display. Mark is also co-chair of the ATSC group TG3-9, which looks at how the different layers of ATSC 3.0 can communicate with each other where necessary.

Also from the TG3 group is the A/338 Companion Device standards document, which details the discovery of second-screen devices such as smartphones and enables them to communicate with the ATSC 3.0 receiver.
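As a heavily hedged sketch of the kind of interaction an A/344-style broadcaster application might have with the receiver, the Python snippet below sends a JSON-RPC 2.0 request over a WebSocket. The endpoint URL and the method name are assumptions for illustration only and are not quoted from the A/344 document.

```python
# Hedged sketch: a broadcaster application querying the receiver over a
# JSON-RPC 2.0 / WebSocket interface. The endpoint URL and method name are
# illustrative assumptions, not quoted from A/344.
import asyncio
import json
import websockets  # third-party: pip install websockets

RECEIVER_WS = "ws://localhost:8080/atscCmd"  # hypothetical receiver endpoint

async def query_current_service() -> dict:
    async with websockets.connect(RECEIVER_WS) as ws:
        request = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "org.atsc.query.service",  # assumed method name
        }
        await ws.send(json.dumps(request))
        return json.loads(await ws.recv())

if __name__ == "__main__":
    print(asyncio.run(query_current_service()))
```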

In this webinar from the IEEE BTS, Mark marries an understanding of these documents with the practical aspects of deploying interactive broadcaster applications to receivers, including some of the motivations for doing so, such as improving revenue through Dynamic Ad Insertion and personalisation.

Register now!
Speakers

Mark Corl
Chair, TG3/S38 Specialist Group on Interactive Environment
Co-chair, TG3-9 AHG on Interlayer Communications in the ATSC 3.0 Ecosystem
Senior Vice President, Emergent Technology Development, Triveni Digital