Video: The next enhancement for RIST

Continuing our look at RIST, the developing protocol which allows reliable streaming over the internet even in the event of packet loss, we take a look at a key feature on the roadmap.

The core proposition of RIST is to produce an interoperable protocol which brings the internet into the list of ways to contribute and distribute low-latency video. It’s resilient to packet loss thanks to its ability to re-request packets which have been lost, yet is light enough for video streaming. In another talk at IBC, we learn about the latest developments, which have added security and many other features to the list of capabilities.

Here, Adi Rozenberg from VideoFlow explains how this will be further extended by upcoming work to allow the source stream to reduce its bitrate in response to reduced capacity in the network. With RIST’s ARQ – the technology which requests missing packets – we find that the retransmissions can actually aggravate bitrate constrictions, particularly when they are permanent. Adi proposes that the only real way to solve a lack of bandwidth is to reduce the bitrate of the source.
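
To see why, consider a rough back-of-the-envelope model (a sketch with illustrative numbers, not anything from the talk): retransmission traffic adds to the very load that caused the loss, so under a permanent constriction the loss rate climbs rather than recovers.

```python
# Illustrative model: under sustained congestion, each lost packet
# triggers a retransmission, which adds to the very load that caused
# the loss in the first place - a feedback loop ARQ alone cannot fix.

def offered_load(source_mbps: float, loss_fraction: float) -> float:
    """Source bitrate plus the retransmission traffic caused by loss."""
    return source_mbps * (1 + loss_fraction)

link_capacity = 8.0   # Mbps actually available (a permanent constriction)
source = 10.0         # Mbps the encoder keeps sending

loss = max(0.0, 1 - link_capacity / source)    # initial loss from congestion
for step in range(5):
    load = offered_load(source, loss)
    loss = max(0.0, 1 - link_capacity / load)  # retransmissions raise the load
    print(f"step {step}: offered {load:.2f} Mbps, loss {loss:.1%}")
# Loss climbs each round - only lowering the source bitrate breaks the loop.
```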

RIST already includes NULL packet removal, so that NULL packets aren’t transmitted and are re-inserted at the remote end. This is a great start in reducing the bitrate of the stream. However, more is needed: a way to tell the encoder to reduce the bandwidth of the video stream itself. This can be accomplished with RTCP.
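
As a sketch of the idea (not RIST’s actual wire format), NULL packets in an MPEG transport stream carry PID 0x1FFF, so they are easy to identify and strip before transmission, with the receiver re-inserting the same amount of stuffing at the far end:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47
NULL_PID = 0x1FFF  # MPEG-TS null (stuffing) packets carry this PID

def strip_null_packets(ts_bytes: bytes) -> tuple[bytes, int]:
    """Remove null packets; return the stripped stream and a count so the
    receiver knows how much stuffing to re-insert at the far end."""
    out, removed = bytearray(), 0
    for i in range(0, len(ts_bytes) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts_bytes[i:i + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            raise ValueError("lost TS sync")
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]  # 13-bit PID
        if pid == NULL_PID:
            removed += 1      # don't transmit; signal the count instead
        else:
            out += pkt
    return bytes(out), removed
```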

Adi highlights the difficulty of detecting when extra bandwidth has returned: a reduction in bandwidth is quickly and clearly signalled by retransmissions, but excess bandwidth returns silently. The system therefore gradually increases the encoder bitrate, constantly probing the current balance between available bandwidth and bitrate.
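
The talk doesn’t specify the exact algorithm, but the behaviour described resembles AIMD-style congestion probing; a minimal sketch, with illustrative step sizes:

```python
def adjust_encoder_bitrate(current_kbps: float,
                           retransmissions_seen: bool,
                           ceiling_kbps: float = 12_000,
                           floor_kbps: float = 1_000) -> float:
    """AIMD-style probe: back off sharply when retransmissions signal
    congestion, otherwise creep upward to discover returned bandwidth."""
    if retransmissions_seen:
        return max(floor_kbps, current_kbps * 0.8)   # fast, clear signal
    return min(ceiling_kbps, current_kbps + 100)     # slow, silent recovery
```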

This works well when there is a single encoder and a single decoder. When there are multiple decoders, life is more difficult. The solution offered is to create a ladder of bitrates, all of which are adaptable, so the destination can switch between profiles. This can be extended to MPTS (Multi-Program Transport Streams) whereby, depending on the destination, services in the MPTS are dropped in order to recover bandwidth. A mechanism is used which prioritises services depending on the destination (e.g. German channels are de-prioritised on delivery to France).
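
As a hypothetical sketch of per-destination service shedding (the service names, bitrates and priorities below are invented for illustration):

```python
# Drop the lowest-priority services for a given destination until the
# remaining programmes fit the measured capacity.

SERVICES = [  # (name, bitrate_mbps, priority per destination; 1 = keep first)
    ("News",    4.0, {"DE": 1, "FR": 1}),
    ("Sport",   8.0, {"DE": 2, "FR": 2}),
    ("German3", 4.0, {"DE": 1, "FR": 9}),  # de-prioritised into France
]

def fit_services(destination: str, capacity_mbps: float) -> list[str]:
    keep, used = [], 0.0
    for name, rate, prio in sorted(SERVICES, key=lambda s: s[2][destination]):
        if used + rate <= capacity_mbps:
            keep.append(name)
            used += rate
    return keep

print(fit_services("FR", 13.0))  # ['News', 'Sport'] - German3 is shed first
```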

The session ends with a Q&A on stream switching details and use in stat muxing.

Watch now!
Speakers

Adi Rozenberg
CTO,
VideoFlow

Video: ATSC 3.0 – What You Need to Know

ATSC 3.0 is the next sea change in North American broadcasting, shared with South Korea, Mexico and other locations. Depending on your viewpoint, this could be as fundamental as the move to digital, in lockstep with the move to HD programming, all those years ago. ATSC 3.0 takes terrestrial broadcasting into the IP world, enabling traditional broadcast to be mixed with internet-based video, entertainment and services as part of one seamless experience.

ATSC is gaining traction in the US and some other countries as a way to deliver digital video within a single traditional VHF channel – and with the latest 3.0 version, this actually moves to broadcasting IP packets over the air.

Now ready for deployment, ATSC 3.0 is at a turning point in the US. With a number of successful trials under its belt, it’s time for the real deployments to start. This panel discussion from TV Technology looks at the groups of stations working together to deploy the standard.

The ‘Transition Guide’ document is one of the first topics this video tackles. With a minimum of technical detail, it explains how ATSC 3.0 is intended to work in terms of spectrum, regulatory matters and its technical features and makeup. We then have a chance to see the ‘NextGenTV’ logo, released in September for equipment which is confirmed compliant with ATSC 3.0.

ATSC 3.0 is a suite of standards and work is still ongoing. There are 27 standards completed or in progress, ranging from the basic system itself to captions and signalling. A lot of work is going into replicating features of current broadcast systems, such as a full implementation of the Emergency Alert System (EAS) and similar elements.

It’s well known that Phoenix, Arizona is a test bed for ATSC 3.0, and next we hear an update on the group of 12 stations which are participating in the adoption of the standard, sharing experiences and results with the industry. We see that they are carrying out trial broadcasts at the moment and will be moving into further testing, including with SFNs (Single Frequency Networks), come 2020. We then see an example timeline showing an estimated 8-12 months needed to launch a market.

The video approaches its end by looking at case studies with WKAR and ARK Multicasting, answering questions such as when next-gen audio will be available, the benefits of SFNs, how ATSC 3.0 would work with 5G, and a look at deploying immersive audio.

Watch now!
Speakers

Pete Sockett
Director of Engineering & Operations,
WRAL-TV, Raleigh
Mark Aitken
Senior VP of Advanced Technology, Sinclair Broadcast Group
President of ONE Media 3.0
Dave Folsom
Consultant,
Pearl TV
Lynn Claudy
Chairman of the ATSC board
Senior VP, Technology at NAB
Tom Butts
Content Director,
TV Technology

Video: Wide Area Facilities Interconnect with SMPTE ST 2110

Adoption of SMPTE’s ST 2110 suite of standards for the transport of professional media is growing, with broadcasters increasingly choosing it for use within their facilities. Andy Rayner takes the stage at SMPTE 2019 to discuss the work being undertaken to manage the use of ST 2110 between facilities. In order to do this, he looks at how to manage the data out of the facility, the potential use of JPEG XS, timing and control.

Long-established practices of path protection and FEC are already catered for, with ST 2022-7 providing seamless path protection and ST 2022-5 providing FEC. New to ST 2110 is the ability to send the separate essences bundled together in a virtual trunk. This has the benefit of avoiding streams being split up during transport and hence potentially suffering different delays. It also helps with FEC efficiency and allows transport of other types of traffic.
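
A minimal sketch of the ST 2022-7 idea, assuming packets from both paths are identified by their RTP sequence numbers: keep the first copy of each and discard the duplicate, so a loss on either path is invisible downstream. (A real receiver uses a bounded reorder buffer, since RTP sequence numbers wrap at 2^16.)

```python
def seamless_merge(arrivals):
    """arrivals: packets in arrival order across both paths, each a
    (rtp_sequence_number, payload) tuple. Emit each sequence number once."""
    seen = set()
    for seq, payload in arrivals:
        if seq in seen:
            continue          # duplicate from the other path - discard
        seen.add(seq)
        yield seq, payload

path_a = [(1, b"A"), (3, b"C")]                  # packet 2 lost on path A
path_b = [(1, b"A"), (2, b"B"), (3, b"C")]       # intact on path B
merged = sorted(seamless_merge(path_a + path_b))
assert [s for s, _ in merged] == [1, 2, 3]       # loss on path A is hidden
```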

Timing is key for ST 2110, which is why it natively uses the Precision Time Protocol (PTP), formalised for use in broadcast under ST 2059. Andy highlights the problem of reconciling timing at the far end, but also the ‘missed opportunity’ that the timing will usually get regenerated and therefore the time of media ingest is lost. This may change over the next year.
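
The underlying principle of ST 2059-1 is that media clocks are phase-aligned to the PTP epoch, so any device can independently derive the same RTP timestamp for the same instant. A simplified sketch (real devices track PTP continuously rather than taking a single reading):

```python
RTP_WRAP = 2**32  # RTP timestamps are 32-bit and wrap

def rtp_timestamp(ptp_seconds: float, media_clock_hz: int) -> int:
    """RTP timestamp for a given time since the PTP epoch (TAI)."""
    return int(ptp_seconds * media_clock_hz) % RTP_WRAP

# 90 kHz video clock and 48 kHz audio clock at the same PTP instant:
t = 1_700_000_000.0  # seconds since the PTP epoch (illustrative value)
video_ts = rtp_timestamp(t, 90_000)
audio_ts = rtp_timestamp(t, 48_000)
```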

The creation of ST 2110-22 brings compressed media into ST 2110 for the first time. Andy mentions that JPEG XS can be used – and is already being deployed. Control is the next topic, with Andy focussing on the secure sharing of NMOS IS-04 & IS-05 between facilities, covering registration, control and the security needed.
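
As a hypothetical illustration of what sharing IS-04 between facilities might look like, here is a query against a remote registry’s IS-04 Query API; the registry URL and bearer token are assumptions, though the /x-nmos/query/... resources and the BCP-003 security recommendations are real AMWA specifications:

```python
import requests

REGISTRY = "https://registry.remote-facility.example"  # hypothetical URL

def list_remote_senders():
    """Fetch the senders registered in a remote facility's IS-04 registry."""
    resp = requests.get(
        f"{REGISTRY}/x-nmos/query/v1.3/senders",
        timeout=5,
        # In practice, secure sharing between facilities means TLS plus an
        # agreed authorisation scheme (e.g. tokens per AMWA BCP-003).
        headers={"Authorization": "Bearer <token>"},  # placeholder token
    )
    resp.raise_for_status()
    return [(s["id"], s["label"]) for s in resp.json()]
```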

The talk ends with questions on FEC Latency, RIST and potential downsides of GRE trunking.

Watch now!
Speaker

Andy Rayner
Chief Technologist,
Nevion

Video: The challenges of deploying Apple’s Low Latency HLS In Real Life

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies already underpinning websites at the time, it brought with it great scalability and the ability to seamlessly move between different bitrate streams to help deal with varying network performance (and computer performance!). In the beginning, streaming latency wasn’t a big deal, but with multi-million pound sports events now routinely streamed, this has changed and latency is one of the biggest challenges in streaming media today.

Low-Latency HLS (LL-HLS) is Apple’s way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters. The release of LL-HLS came as a blow to the community-driven moves to deliver lower latency and, indeed, to the adoption of CMAF with MPEG-DASH. But as more light was shone on the detail, more questions arose about how this was actually going to work in practice.

Marina Kalkanis from M2A Media explains how they have been working with DAZN and Akamai to get LL-HLS working and what they are learning in this pilot project. Choosing the new segment sizes and how they are delivered is a key first step in ensuring low latency. M2A is testing 320ms segment sizes, which means very frequent requests for playlists and quickly growing playlist files; both are issues which need to be managed.
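
Some back-of-the-envelope arithmetic shows why (illustrative numbers only):

```python
# With 320 ms segments, a player polling once per segment generates
# several playlist requests per second, and the playlist gains a new
# entry every 320 ms unless it is actively shortened.
part_duration = 0.32                      # seconds per segment
requests_per_player = 1 / part_duration   # ~3.1 playlist fetches/second

entries_per_hour = 3600 / part_duration   # 11,250 new playlist entries/hour
print(f"{requests_per_player:.1f} req/s per player, "
      f"{entries_per_hour:.0f} new playlist entries per hour")
```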

Marina explains the use of playlist shortening, the use of HTTP/2 push to reduce latency, integration into the CDN and what the CDN is required to do. Marina finishes by explaining how they are conducting the testing and the status of the project.
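
A minimal sketch of playlist shortening under these assumptions: serve only a sliding window of the newest entries and advance EXT-X-MEDIA-SEQUENCE so players know older entries were removed. (Real LL-HLS playlists also carry part and server-control tags omitted here.)

```python
def shortened_playlist(segments, window=6, target_duration=2):
    """segments: list of (sequence_number, uri, duration) for the stream.
    Return an HLS media playlist containing only the newest `window` entries."""
    recent = segments[-window:]
    lines = [
        "#EXTM3U",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{recent[0][0]}",  # first seq still listed
    ]
    for _seq, uri, duration in recent:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    return "\n".join(lines)
```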

Watch now!
Speaker

Marina Kalkanis
CEO,
M2A Media