Continuing our look at ATSC 3.0, our fifth talk straddles technical detail and basic business cases. We’ve seen talks on implementation experience in Chicago and Phoenix; now we look at receiving the data with open-source software.
We’ve covered before the importance of ATSC 3.0 in the North American markets and the other regions adopting it. Jason Justman from Sinclair Digital sets out the business case and the reasons to push for the standard despite its incompatibility with previous generations. He then discusses what Software Defined Radio is and how it fits into the puzzle, covering the early state of this technology.
After a brief overview of the RF side of ATSC 3.0, itself a leap forward, Jason explains how the video layer benefits. Building on ISO BMFF, he introduces MMT (MPEG Media Transport), explaining what it is and why it’s used for ATSC 3.0.
The next section of the talk showcases libatsc3, an open-source library whose goal is to open up ATSC 3.0 to talented software engineers. Jason demos the library, which allows live decoding of ATSC 3.0, including MMT material.
Jason finishes with a Q&A, covering SCTE 34 and an interesting comparison between DVB-T2 and ATSC 3.0, making this a very useful talk that fills in technical gaps no other ATSC 3.0 talk covers.
ATSC 3.0 is the next sea change in North American broadcasting, shared with South Korea, Mexico and other locations. Depending on your viewpoint, this could be as fundamental as the move to digital, in lockstep with the move to HD programming all those years ago.
ATSC 3.0 takes terrestrial broadcasting into the IP world: everything transmitted over the air is carried over IP, and it brings with it the ability to split the bandwidth into separate pipes.
Here, Dr. Richard Chernock presents a detailed description of the available features within ATSC 3.0. He explains the new constellations and modulation properties, delving into the ability to split your transmission bandwidth into separate ‘pipes’. These pipes can have different modulation parameters, robustness and so on. The switch from 8VSB to OFDM allows Single Frequency Networks, which can actually help reception (due to guard intervals).
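To make the trade-off concrete, here is a hypothetical sketch (not the ATSC 3.0 signalling format, and the `Plp` class and its fields are illustrative) of how a broadcaster might configure two pipes in one RF channel, one robust and one high-capacity:

```python
# Hypothetical model of two "pipes" sharing one RF channel, each with its
# own modulation and FEC settings. ATSC 3.0's LDPC code rates are of the
# form n/15; lower rates trade capacity for robustness.
from dataclasses import dataclass

@dataclass
class Plp:
    name: str          # what the pipe carries
    modulation: str    # constellation, e.g. "QPSK" or "256QAM"
    bits_per_symbol: int
    code_rate: float   # LDPC code rate; lower = more robust, less capacity

    def spectral_efficiency(self) -> float:
        # Rough useful bits per transmitted symbol after FEC overhead.
        return self.bits_per_symbol * self.code_rate

# A robust pipe for mobile reception, a high-capacity pipe for fixed UHD.
mobile = Plp("mobile/audio", "QPSK", bits_per_symbol=2, code_rate=4/15)
fixed = Plp("fixed UHD", "256QAM", bits_per_symbol=8, code_rate=11/15)

for plp in (mobile, fixed):
    print(f"{plp.name}: ~{plp.spectral_efficiency():.2f} useful bits/symbol")
```

The point of the sketch is simply that the robust pipe delivers roughly a tenth of the payload per symbol of the high-capacity one, which is the trade the broadcaster is making.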
Additionally, the standard supports HEVC and scalable video (SHVC), whereby a single UHD encode can be sent with an HD base layer, which can be decoded by every decoder, plus an ‘enhancement layer’ which can be optionally decoded to produce full UHD output for those decoders/displays which can support it.
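The layer-selection logic this implies can be sketched as follows (a purely illustrative function, not a real SHVC decoder API):

```python
# Illustrative sketch: which SHVC layers a receiver decodes depends on
# what the display supports and what was actually received.
def choose_layers(display_supports_uhd: bool, enhancement_received: bool) -> list:
    layers = ["HD base layer"]  # every ATSC 3.0 decoder can handle this
    if display_supports_uhd and enhancement_received:
        layers.append("UHD enhancement layer")
    return layers

print(choose_layers(True, True))   # UHD-capable set decodes both layers
print(choose_layers(False, True))  # HD-only set ignores the enhancement
```

The key property is that the base layer is always sufficient on its own, so one transmission serves both HD and UHD receivers.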
With the move to IP, there is a blurring of broadcast and broadband. This can be used to deliver additional audio tracks via broadband to be played with the main video, and it provides a return path to the broadcaster, which helps with interactivity and audience measurement.
Dr. Chernock also covers HDR, ‘better pixels’ and Next Generation Audio, as well as improvements to Emergency Alert functionality and accessibility features.
Dr. Richard Chernock
Chief Science Officer,
An in-depth talk from SCTE explaining DOCSIS 3.1, presented by Cisco’s Ron Hranac. DOCSIS 3.1 is the latest version of the Data-Over-Cable Service Interface Specification.
The presentation will include information on the following:
– Why DOCSIS 3.1?
– Basic principles of Orthogonal Frequency Division Multiplexing (OFDM).
– Spectrum allocation.
– FEC performance enhancements.
– New Proactive Network Maintenance (PNM) measurements.
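Since OFDM underpins both this talk and the ATSC 3.0 material above, a minimal sketch of the core idea may help: QAM symbols are mapped onto orthogonal subcarriers with an inverse DFT, then a cyclic prefix (the guard interval) is prepended. This is a pure-Python toy with four subcarriers; real systems use an FFT over thousands of carriers.

```python
# Toy OFDM modulator: inverse DFT of the subcarrier symbols, plus a
# cyclic prefix copied from the tail of the symbol. An echo delayed by
# less than the prefix length then causes no inter-symbol interference,
# which is what makes single-frequency networks workable.
import cmath

def ofdm_symbol(qam_symbols, cp_len):
    n = len(qam_symbols)
    # Inverse DFT: each time-domain sample sums contributions from all
    # subcarriers, which are mutually orthogonal over the symbol period.
    time_domain = [
        sum(qam_symbols[k] * cmath.exp(2j * cmath.pi * k * t / n)
            for k in range(n)) / n
        for t in range(n)
    ]
    # Guard interval: prepend a copy of the last cp_len samples.
    return time_domain[-cp_len:] + time_domain

# Four QPSK-modulated subcarriers with a one-sample guard interval.
symbol = ofdm_symbol([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], cp_len=1)
print(len(symbol))  # 4 samples + 1-sample cyclic prefix = 5
```

The cost of the guard interval is capacity: those prefix samples carry no new data, which is why DOCSIS 3.1 and ATSC 3.0 both offer a choice of cyclic prefix lengths.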
CableLabs released version I01 of the new specification in late October 2013. DOCSIS 3.1 introduces a new physical layer, improved Forward Error Correction (FEC) and other features for high-speed data transmission on cable networks. Scalable to 10+ Gbps downstream and 1+ Gbps upstream, DOCSIS 3.1 supports services competitive with fibre to the home, but using cable’s existing HFC platform. Cisco’s Ron Hranac provides an overview of DOCSIS 3.1 from a physical layer perspective.
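A back-of-the-envelope sketch shows why OFDM with high-order QAM gets DOCSIS 3.1 into multi-gigabit territory. The numbers below are illustrative (a 192 MHz OFDM channel, 50 kHz subcarrier spacing, 4096-QAM, and an assumed ~0.9 LDPC code rate), and the calculation ignores cyclic prefix, pilots and other overheads:

```python
# Rough raw bit rate of one OFDM channel: subcarrier count times symbol
# rate times bits per symbol, derated by the FEC code rate. Overheads
# (cyclic prefix, pilots, excluded subcarriers) are ignored.
def raw_rate_bps(bandwidth_hz, spacing_hz, bits_per_symbol, code_rate):
    subcarriers = bandwidth_hz / spacing_hz
    symbol_rate = spacing_hz  # one OFDM symbol per 1/spacing seconds
    return subcarriers * symbol_rate * bits_per_symbol * code_rate

# 192 MHz channel, 50 kHz spacing, 4096-QAM (12 bits/symbol), rate ~0.9.
rate = raw_rate_bps(192e6, 50e3, 12, 0.9)
print(f"{rate / 1e9:.2f} Gbps")  # roughly 2 Gbps per channel
```

Bonding several such channels is how the headline 10+ Gbps downstream figure is reached.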