Video: Low-latency DASH Streaming Using Open Source Tools

Low Latency DASH, also known as LL-DASH, is a modification of MPEG DASH that allows it to operate with close to two seconds’ latency, bringing it down to meet, or beat, standard broadcast signals.

Brightcove’s Bo Zhang starts by outlining the aims and the methods of getting there. For instance, he explains, HTTP 1.1 chunked transfer encoding is key to low-latency streaming as it allows the server to start sending a video segment while it’s still being written, rather than waiting until the file is complete. LL-DASH can also advertise segments ahead of their nominal availability time using the ‘availabilityTimeOffset’ attribute.
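
As a concrete illustration, here is a minimal Python sketch of serving a still-growing segment with chunked transfer encoding; the segment name and the completion check are hypothetical, and this is a toy server rather than anything from the talk.

```python
# Minimal sketch: serve a still-growing segment with HTTP/1.1 chunked
# transfer encoding so a player can start downloading it before the
# packager has finished writing it. Segment path and completion check
# are hypothetical placeholders.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SEGMENT = "segment_0001.m4s"  # hypothetical segment being written

def segment_complete(path: str) -> bool:
    # Placeholder: a real server would know when the packager closes
    # the file (e.g. via a rename or a manifest update).
    return True

class ChunkedHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # chunked transfer needs HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "video/iso.segment")
        self.send_header("Transfer-Encoding", "chunked")  # no Content-Length
        self.end_headers()
        with open(SEGMENT, "rb") as f:
            while True:
                data = f.read(16384)
                if data:
                    # Chunk framing: hex length, CRLF, payload, CRLF.
                    self.wfile.write(b"%X\r\n" % len(data) + data + b"\r\n")
                elif segment_complete(SEGMENT):
                    break
                else:
                    time.sleep(0.05)  # wait for the packager to write more
        self.wfile.write(b"0\r\n\r\n")  # zero-length chunk ends the body

if __name__ == "__main__":
    HTTPServer(("", 8080), ChunkedHandler).serve_forever()
```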

As LL-DASH is a living standard, there are updates on the way: resync points will allow a player to receive a list of places where it can join a stream, using the SAP types in the ISO BMFF spec; the server can send a ‘service description’ to the player, which can use that information to adjust its latency; and event messages can now be inserted in the middle of segments.
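
To give a feel for what a player might do with a service description’s target latency, here is a toy proportional rate controller in Python; the target, gain and rate bounds are assumed values for the sketch, not figures from the spec or from dash.js.

```python
# Toy sketch of latency control: nudge the playback rate so the measured
# live latency converges on the signalled target. Gain and rate bounds
# are illustrative values only.
def playback_rate(live_latency_s: float, target_s: float = 3.0,
                  gain: float = 0.5, min_rate: float = 0.95,
                  max_rate: float = 1.05) -> float:
    error = live_latency_s - target_s  # positive: we are too far behind live
    rate = 1.0 + gain * error          # speed up to catch up, slow to rebuffer
    return max(min_rate, min(max_rate, rate))

print(playback_rate(3.6))  # behind target -> clamped to 1.05 (play faster)
print(playback_rate(2.8))  # ahead of target -> clamped to 0.95 (play slower)
```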

Bo then moves on to explain that he and the team set up an experiment to gain experience with LL-DASH and to test overall latency. He shows that they decided to stream RTMP out of OBS into a GitHub project called ‘node-gpac-dash’ and then on to the dash.js player, all between Boston and Seattle. The test runs at 800×600, 30fps with a bitrate of 2.5Mbps and shows results of between 2.5 and 5 seconds depending on network conditions.

As Bo moves towards the Q&A, he notes that low-latency streaming is less scalable because a TCP connection needs to be kept open between the player and the CDN, which is a burden on the CDN. Another compromise is that smaller chunk sizes in LL-DASH reduce latency but increase I/O, meaning you may sometimes have to increase the chunk size (and hence the latency of the stream) to get better performance; see the sketch below. He also adds that adverts are harder to deliver in low-latency streams because there is so little time to request and receive the ad content.
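
A back-of-envelope sketch of that trade-off, with purely illustrative numbers rather than figures from Bo’s experiment:

```python
# The latency floor roughly tracks the chunk duration, while the number
# of I/O operations per segment grows as chunks shrink. The 6-second
# segment length is an assumption for the example.
segment_s = 6.0
for chunk_s in (1.0, 0.5, 0.1):
    ios_per_segment = segment_s / chunk_s
    print(f"chunk={chunk_s:.1f}s  latency floor ~{chunk_s:.1f}s  "
          f"I/O ops per segment={ios_per_segment:.0f}")
```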

Watch now!
More detail about the experiments in this talk can be found in Bo’s blog post.
Speakers

Bo Zhang
Staff Video System Engineer, Research
Brightcove

Video: The Video Codec Landscape 2020

2020 has brought a bevy of new codecs from MPEG. These codecs represent a new recognition that the right codec is the one that fits your hardware and your business case. We have the natural evolution of HEVC, namely VVC, which trades higher complexity for impressive bitrate savings. There’s a recognition that sometimes a better codec is one that needs less computation, namely LCEVC, which enables a step-change in quality for lower-power equipment. And there’s also EVC, which has a licence-free mode to reduce the risk for companies that prefer low-risk deployments.

Christian Feldmann from Bitmovin takes the stage in this video to introduce these three new contenders in an increasingly busy codec landscape. Christian starts by talking about the incumbents, namely AVC, HEVC, VP9 and AV1. He puts their propositions up against the promises of these new codecs, which are all at the point of finalisation/publication. For the current codecs, Christian looks at what hardware and software support is like, as well as the licensing.

EVC (Essential Video Codec) is the first focus of the presentation; its headline feature is a more predictable licensing landscape. The baseline profile needs no licensing as it uses technologies old enough to be outside of patent protection. The main profile does require licensing but delivers much better performance. Furthermore, the advanced tools in the main profile can each be turned off individually, avoiding patents that you don’t want to license. The hope is that this will encourage patent holders to license the technology in a timely manner, since the customer can otherwise, relatively easily, walk away. The baseline profile alone should deliver around 32% better compression than AVC, and the main profile can give up to a 25% benefit over HEVC.
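
A toy sketch of that per-tool switching idea follows; the tool names are invented for illustration and are not the normative EVC tool list.

```python
# Hedged illustration of the per-tool switch in the EVC main profile:
# any advanced tool whose patents are not licensed can be switched off
# individually. Tool names here are made up for the example.
MAIN_PROFILE_TOOLS = ("advanced_intra", "affine_motion", "adaptive_loop_filter")

def encoder_config(licensed_tools: set) -> dict:
    # Baseline tools are always available; each main-profile tool is
    # enabled only if a licence for it is held.
    return {tool: tool in licensed_tools for tool in MAIN_PROFILE_TOOLS}

print(encoder_config({"advanced_intra"}))
# -> {'advanced_intra': True, 'affine_motion': False, 'adaptive_loop_filter': False}
```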

LCEVC (Low Complexity Enhancement Video Coding) is next; it is a new technique in which two codecs work together. It uses a ‘base’ codec at low resolution, such as AVC, HEVC or AV1. This low-fidelity version is then accompanied by enhancement information so that the low-resolution base can be upscaled to the desired resolution and corrected, with the relevant edges and detail added back. The overall effect is that complexity is kept low. It’s designed as a software codec that can fit into almost any hardware by using the hardware decoders in SoCs/CPUs (e.g. Intel Quick Sync) plus the CPU itself, which applies the enhancement. This ability to fit around hardware makes the codec ideal for improving the decoding capability of existing hardware. It stands up well against AVC, providing at least a 36% improvement, and at worst improves slightly on HEVC bitrates, but with much-reduced encoder computation.
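
The two-layer structure is easy to see in a toy numpy sketch, assuming a naive 2x scaler in place of a real base codec and keeping the enhancement as a raw residual rather than LCEVC’s actual compact syntax.

```python
# Conceptual sketch of the LCEVC split: a low-resolution base plus an
# enhancement layer that corrects the upscaled base. Real LCEVC encodes
# the residual compactly; here it is kept raw for clarity.
import numpy as np

def downscale(frame):
    # Naive 2x box filter standing in for scaling before the base codec.
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale(frame):
    # Naive 2x nearest-neighbour upscale standing in for the decoder's scaler.
    return frame.repeat(2, axis=0).repeat(2, axis=1)

def encode(frame):
    base = downscale(frame)              # goes through AVC/HEVC/AV1 etc.
    enhancement = frame - upscale(base)  # detail the upscale cannot recover
    return base, enhancement

def decode(base, enhancement):
    return upscale(base) + enhancement   # cheap to apply on a CPU

frame = np.random.rand(1080, 1920)
base, enhancement = encode(frame)
assert np.allclose(decode(base, enhancement), frame)
```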

VVC (Versatile Video Coding) is discussed by Christian, but not in great detail as Bitmovin will be covering it separately. As an evolution of HEVC, it’s no surprise that bitrate is reduced by at least 40%, though encoding complexity has gone up ten-fold, a jump similar to that from AVC to HEVC. VVC also has built-in features not delivered as standard before, such as special modes for screen content (such as computer games) and 360-degree video.

Free to watch now!

Speaker

Christian Feldmann
Lead encoding engineer,
Bitmovin

Video: ATSC 3.0 Seminar Part III

ATSC 3.0 is the US-developed set of transmission standards which fully embraces IP technology, both over the air and for internet-delivered content. This talk follows on from the previous two, which looked at the physical and transmission layers. Here we see how using IP throughout brings benefits in broadening choice and in moving seamlessly between on-demand and live channels.

Richard Chernock is back as our Explainer in Chief for this session. He starts by explaining the driver for the all-IP adoption: so much media and data now comes from the internet. The traditional ATSC 1.0 MPEG transport stream island worked well for digital broadcasting but has proven tricky to integrate with internet delivery, though not without some success if you consider HbbTV. Realistically, though, ATSC sees that as a stepping stone to the inevitable use of IP everywhere, and if we look at DVB-I from the DVB Project, we see that the other side of the Atlantic also sees the advantages.

But seamlessly mixing a broadcaster’s on-demand services with their linear channels is only one benefit. Richard highlights multilingual markets where the two main languages can be transmitted (for the US, usually English and Spanish) while other languages are made available via the internet. This is a win in both directions: because these extra languages have smaller audiences, the internet delivery costs are manageable, and for the same reason they wouldn’t warrant space in the main transmission.

Richard introduces ISO BMFF and MPEG DASH, the foundational technologies for delivering video and audio over ATSC 3.0 and, as he points out, over internet streaming services in general.

We get an overview of the protocol stack to see how these pieces fit together. Richard explains both MPEG DASH and the ROUTE protocol, which, building on FLUTE, allows delivery of data over IP on uni-directional links.
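
To give a feel for unidirectional object delivery, here is a toy Python sender; this is not the ROUTE wire format, and the multicast address, port and header layout are invented for the sketch.

```python
# Toy illustration of FLUTE/ROUTE-style delivery: an object (e.g. a DASH
# segment) is cut into numbered datagrams and sent over UDP multicast
# with no return channel; a receiver reassembles from the header alone.
import socket
import struct

MCAST_GRP, MCAST_PORT, PAYLOAD = "239.1.1.1", 5004, 1400  # assumed values

def send_object(data: bytes, obj_id: int) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    chunks = [data[i:i + PAYLOAD] for i in range(0, len(data), PAYLOAD)]
    for seq, chunk in enumerate(chunks):
        # Header: object id, sequence number, total count, so a receiver
        # can reassemble or detect loss without ever replying.
        header = struct.pack("!III", obj_id, seq, len(chunks))
        sock.sendto(header + chunk, (MCAST_GRP, MCAST_PORT))

send_object(b"\x00" * 10_000, obj_id=1)  # e.g. one media segment
```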

The use of MPEG DASH allows advertising to become more targeted for the broadcaster. Cable companies, Richard points out, have long been able to swap out an advert in a local area for another and increase their revenue. In recent years, companies like Sky in the UK (now part of Comcast) have developed technologies like AdSmart which, even with MPEG TS satellite transmissions, can receive internet-delivered targeted ads and play them over the top of the transmitted ones, even when the programme is replayed from disk. Any adopter of ATSC 3.0 can achieve the same, which could be part of a business case for making the move.
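
In DASH, the hook for this is the Period: a presentation is a sequence of Periods, and an ad Period can be swapped out per viewer. A toy sketch with invented data structures, not a real manifest manipulation API:

```python
# Toy sketch of period-based ad substitution in a DASH timeline: the
# broadcast manifest carries a generic ad period which is replaced,
# per household, with a targeted alternative.
from dataclasses import dataclass

@dataclass
class Period:
    pid: str
    url: str

timeline = [Period("p1", "show_part1.mpd#p1"),
            Period("ad1", "generic_ad.mpd"),
            Period("p2", "show_part2.mpd#p2")]

def target_ads(timeline, household_ads):
    # Replace any ad period for which a targeted alternative exists.
    return [household_ads.get(p.pid, p) for p in timeline]

personalised = target_ads(timeline, {"ad1": Period("ad1", "targeted_ad_123.mpd")})
```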

Another part of the business case is that ATSC 3.0 not only supports 4K, unlike ATSC 1.0, but also ‘better pixels’. ‘Better pixels’ has long been the way to remind people that TV isn’t just about resolution; it includes ‘next generation audio’ (NGA), HDR, Wide Colour Gamut (WCG) and even higher frame rates. The choice of the HEVC Main 10 profile should allow all of these technologies to be used. Richard makes the point that if you balance the additional bitrate requirement against the likely impact on viewers, UHD doesn’t make sense compared to, say, enabling HDR.

Richard moves his focus to audio next, unpacking the term NGA and talking about surround sound and object-based sound. He notes that renderers are very advanced now and can analyse a room to deliver a surround-sound experience without speakers having to be placed in the exact spots normally required. Offering options, rather than a single 5.1 surround track, is very important for personalisation, which isn’t just about choosing a language but also covers commentary, audio description and more. Richard says that audio can be delivered in a separate pipe (a PLP, discussed previously) such that even after the video has cut out due to bad reception, the audio continues.

The talk finishes by looking at accessibility, including picture-in-picture signing and SMPTE Timed Text captions (IMSC1), as well as security and the ATSC 3.0 standards stack.

Watch now!
Speaker

Richard Chernock
Former CSO,
Triveni Digital

Video: Versatile Video Coding (VVC)

MPEG’s VVC is the next iteration on from HEVC (H.265). Whilst other codecs such as EVC and LCEVC are also being finalised, this talk looks at how VVC builds on HEVC while also turning its hand to screen content and VR, becoming a more versatile codec than HEVC and meeting the world’s changing needs. For an overview of these emerging codecs, this interview covers them all.

VVC is a joint project between the ITU-T and MPEG (AKA ISO/IEC). Its aim is a 50% reduction in bitrate for the same picture quality, with the emphasis on higher resolutions, HDR and 10-bit video, while acknowledging that optimising codecs for natural video is no longer the core requirement for a lot of people. Its versatility comes from being able to encode screen content and to offer independent sub-picture encoding and scalable encoding, among other tools.

Gary Sullivan from Microsoft Technology & Research talks us through what all this means. He starts by outlining the case for a new codec, particularly the reach for another 50% bitrate saving which may come at further computational cost. Gary points out that, as video use continues to increase, anything that can be done to significantly reduce bitrates will either drive down costs or allow people to use video in better ways.

Any codec is a set of tools all working together to create the final product. Some tools are not always needed, say if you are running on a lower-power system, allowing the codec to be tuned for the situation. Gary puts up a list of some of the tools in VVC, many of which are an evolution of the same tool in HEVC, and highlights a few to give an insight into the improvements under the hood.

Gary’s picks of the big hitters in the tool-set are: the Adaptive Loop Filter, which reduces artefacts and prediction errors; affine motion compensation, which provides better motion compensation for movement such as rotation and zoom; triangle partitioning mode, a higher-computation improvement in inter prediction; bi-directional optical flow (BIO) for motion prediction; and intra block copy, which is useful for screen content where an identical block is found elsewhere in the same frame.

Gary highlights SCC (Screen Content Coding) tools, which existed for HEVC but not in the base profile; this has changed for VVC, so all VVC implementations will support SCC, whereas very few HEVC implementations do. Reference Picture Resampling (RPR) allows the resolution to change from picture to picture, with reference pictures stored at a different resolution from the current picture. And independent sub-pictures allow parts of the video frame to be re-arranged, or only one region to be decoded, as sketched below. This works well for VR and video conferencing, and allows the creation of composite videos without intermediate decoding.
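
As a rough illustration of the sub-picture idea, the sketch below assembles a mosaic from independently ‘decoded’ regions; decode_subpicture is a hypothetical stand-in for a real VVC decoder, and the tile size is assumed.

```python
# Toy sketch: four independent sub-pictures are decoded separately and
# re-arranged into a 2x2 composite without touching the other streams,
# e.g. for a video-conference mosaic. numpy stands in for real decoding.
import numpy as np

TILE_H, TILE_W = 540, 960  # assumed sub-picture size

def decode_subpicture(sub_id: int) -> np.ndarray:
    # Stand-in: a real decoder would decode only this sub-picture's NAL units.
    return np.full((TILE_H, TILE_W), sub_id, dtype=np.uint8)

order = [3, 1, 2, 0]                      # re-arrange participants freely
tiles = [decode_subpicture(i) for i in order]
frame = np.vstack([np.hstack(tiles[:2]),  # 1080x1920 composite frame
                   np.hstack(tiles[2:])])
```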

As usual, more thinking about how to compress a picture brings further computational demands. MPEG’s LCEVC is the standards body’s way of pushing back against this, as notable bitrate improvements are possible even for low-power devices. With VVC, however, versatility is the aim. Decoders see a 60% increase in decode complexity, and although MPEG specifications only define the decoder (allowing a lot of ongoing innovation in encoding techniques), current example encoders are about 8 or 9 times slower. Performance is better for screen content and at higher resolutions. Whilst the coding part of VVC is mature, the versatility features are still being worked on, but the aim is to publish within about two months.

The video finishes with a Q&A that covers implementing DASH in a low-latency video workflow, how CMAF will be specified to use VVC, and live workflows, which Gary explains always come after the initial file-based work and are best understood after the first attempts at encoder implementations, noting that hardware lags by about two years. He goes on to explain that chipmakers need to see the demand. At the moment there is a lot of focus from implementors on AV1, not to mention EVC, so the question is how much demand can be generated.

This talk is based on a talk from Benjamin Bross originally given to an ITU workshop (PDF), then presented at Mile High Video by Benjamin, and updated by Gary for this conversation with the Seattle Video Tech community.

Bitmovin has an article highlighting many of the improvements in VVC written by Christian Feldmann who has given many talks on both AV1 and VVC.

Watch now!

Speakers

Gary Sullivan
Microsoft Technology & Research