Video: ATSC 3.0 – What You Need to Know

ATSC 3.0 is the next sea change in North American broadcasting, shared with South Korea, Mexico and other locations. Depending on your viewpoint, this could be as fundamental as the move to digital in lockstep with the move to HD programming all those years ago. ATSC 3.0 takes terrestrial broadcasting into the IP world, enabling traditional broadcast to be mixed with internet-based video, entertainment and services as part of one seamless experience.

ATSC 3.0 is gaining traction in the US and some other countries as a way to deliver digital video within a single traditional broadcast channel – and with the latest 3.0 version, this moves to broadcasting IP packets over the air.

Now ready for deployment in the US, ATSC 3.0 is at a turning point. With a number of successful trials under its belt, it’s time for the real deployments to start. This panel discussion from TV Technology looks at the groups of stations working together to deploy the standard.

The ‘Transition Guide’ document is one of the first topics this video tackles. With a minimum of technical detail, this document explains how ATSC 3.0 is intended to work in terms of spectrum, regulatory matters and its technical features and makeup. We then have a chance to see the ‘NextGenTV’ logo, released in September for equipment confirmed compliant with ATSC 3.0.

ATSC 3.0 is a suite of standards and work is still ongoing. There are 27 standards completed or in progress, ranging from the basic system itself to captions to signalling. A lot of work is going into replicating features of the current broadcast systems, such as full implementation of the Emergency Alert System (EAS) and similar elements.

It’s well known that Phoenix, Arizona is a test bed for ATSC 3.0, and next we hear an update on the group of 12 stations participating in the adoption of the standard, sharing experiences and results with the industry. We see that they are carrying out trial broadcasts at the moment and will be moving into further testing, including with SFNs (Single Frequency Networks), come 2020. We then see an example timeframe showing an estimated 8-12 months needed to launch a market.

The video approaches its end by looking at case studies with WKAR and ARK Multicasting, answering questions such as when next-gen audio will be available, what the benefits of SFNs are and how they would work with 5G, plus a look at deploying immersive audio.

Watch now!
Speakers

Pete Sockett
Director of Engineering & Operations,
WRAL-TV, Raleigh
Mark Aitken
Senior VP of Advanced Technology, Sinclair Broadcast Group
President of ONE Media 3.0
Dave Folsom
Consultant,
Pearl TV
Lynn Claudy
Chairman of the ATSC board
Senior VP, Technology at NAB
Tom Butts
Content Director,
TV Technology

Video: Wide Area Facilities Interconnect with SMPTE ST 2110

Adoption of SMPTE’s ST 2110 suite of standards for the transport of professional media is increasing, with broadcasters increasingly choosing it for use within their facilities. Andy Rayner takes the stage at SMPTE 2019 to discuss the work being undertaken to manage using ST 2110 between facilities. To do this, he looks at how to manage the data out of the facility, the potential use of JPEG XS, timing and control.

Long-established practices of path protection and FEC are already catered for, with ST 2022-7 providing seamless path protection and ST 2022-5 providing FEC. New to ST 2110 is the ability to send the separate essences bundled together in a virtual trunk. This has the benefit of avoiding the streams being split up during transport and hence potentially suffering different delays. It also helps with FEC efficiency and allows transport of other types of traffic.
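ST 2022-7 protection can be pictured as a receiver taking each packet from whichever of two identical streams delivers it first. A minimal Python sketch of that merge logic, under illustrative assumptions (the names are my own, the list-based interleave stands in for event-driven arrival, and 16-bit RTP sequence wraparound is ignored):

```python
def merge_seamless(path_a, path_b):
    """Merge two redundant packet streams, given as lists of
    (sequence_number, payload) tuples in arrival order, into one
    de-duplicated stream ordered by sequence number."""
    seen = set()
    merged = []
    for seq, payload in _interleave(path_a, path_b):
        if seq not in seen:  # the first copy of each packet to arrive wins
            seen.add(seq)
            merged.append((seq, payload))
    return [payload for _, payload in sorted(merged)]

def _interleave(a, b):
    """Simulate packets from both paths arriving alternately."""
    out = []
    for i in range(max(len(a), len(b))):
        if i < len(a):
            out.append(a[i])
        if i < len(b):
            out.append(b[i])
    return out

# Path A loses packet 2; path B has it, so the merged stream is complete.
path_a = [(1, "A1"), (3, "A3"), (4, "A4")]
path_b = [(1, "B1"), (2, "B2"), (3, "B3"), (4, "B4")]
print(merge_seamless(path_a, path_b))  # → ['A1', 'B2', 'A3', 'A4']
```

The point of the "seamless" in ST 2022-7 is exactly this: as long as each packet survives on at least one path, the output is unbroken with no switching artefact.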

Timing is key for ST 2110, which is why it natively uses the Precision Time Protocol (PTP), formalised for use in broadcast under ST 2059. Andy highlights the problem of reconciling timing at the far end, but also the ‘missed opportunity’ that the timing will usually be regenerated, so the time of media ingest is lost. This may change over the next year.
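The PTP relationship can be illustrated with the ST 2059-1 idea that an RTP timestamp is derived directly from the shared PTP clock, so any two locked devices sampling the same instant produce the same value. A minimal sketch, assuming a 90 kHz video media clock (the function name is my own, not from any standard API):

```python
RTP_WRAP = 2 ** 32  # RTP timestamps are 32-bit and wrap around

def rtp_timestamp(ptp_seconds: float, clock_rate: int = 90_000) -> int:
    """RTP timestamp for a media clock locked to PTP:
    (seconds since the PTP epoch * media clock rate) mod 2^32."""
    return int(ptp_seconds * clock_rate) % RTP_WRAP

# Two independent devices sampling the same PTP instant agree exactly,
# which is what lets timing be reconstructed anywhere on the network.
t = 1_700_000_000.25  # an arbitrary PTP time in seconds
assert rtp_timestamp(t) == rtp_timestamp(t)
print(rtp_timestamp(t))
```

This also shows Andy's 'missed opportunity': once a device regenerates timestamps from its own current PTP time rather than carrying the original ones through, the link back to the moment of ingest is gone.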

ST 2110-22 brings, for the first time, compressed media into ST 2110. Andy mentions that JPEG XS can be used – and is already being deployed. Control is the next topic, with Andy focussing on the secure sharing of NMOS IS-04 and IS-05 between facilities, covering registration, control and the security needed.

The talk ends with questions on FEC Latency, RIST and potential downsides of GRE trunking.

Watch now!
Speaker

Andy Rayner
Chief Technologist,
Nevion

Video: The challenges of deploying Apple’s Low Latency HLS In Real Life

HLS has taken the world by storm since its first release 10 years ago. Capitalising on the widely understood and deployed technologies already underpinning websites at the time, it brought with it great scalability and the ability to move seamlessly between different bitrate streams to help deal with varying network performance (and computer performance!). In the beginning, streaming latency wasn’t a big deal, but with multi-million-pound sports events now routinely streamed, this has changed, and latency is one of the biggest challenges in streaming media today.

Low-Latency HLS (LL-HLS) is Apple’s way of bringing latency down to be comparable with broadcast television for those live broadcasts where immediacy really matters. The release of LL-HLS came as a blow to the community-driven moves to deliver lower latency and, indeed, to the adoption of low-latency CMAF with MPEG-DASH. But as more light was shone on the detail, the more questions arose about how this was actually going to work in practice.

Marina Kalkanis from M2A Media explains how they have been working with DAZN and Akamai to get LL-HLS working, and what they are learning in this pilot project. Choosing the new segment sizes and how they are delivered is a key first step in ensuring low latency. M2A are testing 320ms part sizes, which means very frequent requests for playlists and quickly growing playlist files; both are issues which need to be managed.
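For illustration, 320ms parts show up in an LL-HLS media playlist as EXT-X-PART entries advertised under a PART-TARGET, with blocking playlist reloads enabled via EXT-X-SERVER-CONTROL. The fragment below is a hand-made sketch with invented URIs and sequence numbers, not M2A's actual playlist:

```
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:4
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=1.0
#EXT-X-PART-INF:PART-TARGET=0.32
#EXT-X-MEDIA-SEQUENCE:266
#EXTINF:3.84,
fileSequence265.mp4
#EXT-X-PART:DURATION=0.32,URI="filePart266.0.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.32,URI="filePart266.1.mp4"
#EXT-X-PART:DURATION=0.32,URI="filePart266.2.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="filePart266.3.mp4"
```

With 320ms parts a new playlist revision appears roughly three times a second, which is exactly why the request rate and the ever-lengthening playlist both have to be managed.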

Marina explains the use of playlist shortening, the use of HTTP/2 push to reduce latency, integration into the CDN and what the CDN is required to do. Marina finishes by explaining how they are conducting the testing and the status of the project.

Watch now!
Speaker

Marina Kalkanis
CEO,
M2A Media

Video: MPEG-5 EVC

MPEG-5 Essential Video Coding (EVC) promises to do what no MPEG standard has done before: deliver great improvements in compression and give assurances over patents. With a novel standardisation process, EVC provides a royalty-free baseline profile, and licensing details are provided upfront.

SMPTE 2019 saw Jonatan Samuelsson, founder of Divideon and an editor of the evolving standard, take us through the details. Jonatan starts by explaining the codec landscape in terms of the new and recent codecs coming online, showing how EVC differs from them, including from its sister codec VVC, alongside which EVC is being developed.

Jonatan explains how the patents are being dealt with; comparing EVC to HEVC, he shows that there is a much simpler range of patent holders. Importantly, the codec offers very granular control to turn separate tools on and off, so that you can exclude any you don’t wish to use for licensing reasons. This is the first time this level of control has been possible. Along with the royalty-free baseline profile, the codec aims to give companies the control they need to use it safely, with predictable costs and without legal challenges.

Target applications for EVC include realtime encoding and video conferencing, but also newer ’emerging’ video formats such as 8K with HDR and WCG. To show how this is achieved, Jonatan explains the different blocks that make up the codec itself before walking us through the results.

Watch now!
Speaker

Jonatan Samuelsson
Founder,
Divideon