The ever popular, always analytical Jan Ozer spends time here evaluating the quality of these codecs against the ever-present h.264. As the team here at The Broadcast Knowledge takes a short break, we’re recapping the most popular posts of the year. Interestingly, this post is from over a year ago but is still seeing top-10 traffic. That’s no surprise since, as I said in my interview with SMPTE on the subject of codecs, everyone touches codecs in some way, even if only at home.
Jan takes a careful approach to explaining the penetration and abilities of h.264 in order to see at what point we can break even and start to benefit from using alternative codecs. He then takes each codec in turn, looking at its pros and cons to paint a picture of the options available for those willing and able to go beyond h.264.
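That break-even point is, at heart, simple arithmetic: a newer codec only pays off once the delivery bandwidth it saves outweighs its extra encoding cost. The sketch below makes that concrete with entirely illustrative figures of our own (the costs and bitrate savings are assumptions, not numbers from Jan's analysis):

```python
# Illustrative break-even sketch: how many views does a title need before
# the CDN savings of a more efficient codec cover its extra encode cost?
# All figures are made-up assumptions for illustration only.

def break_even_views(extra_encode_cost, bitrate_saving_mbps,
                     avg_view_hours, cdn_cost_per_gb):
    """Views needed before delivery savings cover the extra encode cost."""
    # Mbps -> GB per hour: (Mbps * 3600 s) / 8 bits-per-byte / 1000 MB-per-GB
    gb_saved_per_view = bitrate_saving_mbps * 3600 / 8 / 1000 * avg_view_hours
    saving_per_view = gb_saved_per_view * cdn_cost_per_gb
    return extra_encode_cost / saving_per_view

# Assumed: the newer codec costs $30 more per title to encode, saves
# 1.5 Mbps at equal quality, viewers watch 0.5 h on average, and CDN
# delivery costs $0.01 per GB.
views = break_even_views(30, 1.5, 0.5, 0.01)
print(round(views))  # → 8889
```

Under these assumed numbers the newer codec starts paying for itself after roughly nine thousand views; popular titles clear that easily, long-tail content may never do so, which is exactly why per-title economics matter in this debate.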
AVC, now 16 years old, is long in the tooth but supported by billions of devices. The impetus to replace it comes from the drive to serve customers with a lower cost base and a more capable platform. Cue the new contenders VVC and AV1 – not to mention HEVC. It’s no surprise they compress better than AVC (also known as MPEG-4 AVC and h.264), but do they deliver a cost-efficient, legally safe codec on which to build a business?
Thierry Fautier has done the measurements and presents them in this talk. Thierry explains that the tests were done using reference code which, though unoptimised for speed, should represent the best quality possible from each codec. The comparisons used 1080p video and are reproduced in full in the IBC conference paper.
Licensing is an important topic as, by some, HEVC is seen as a failed codec: not in terms of its compression, but in the reticence of many companies to deploy it, driven by the business risk of uncertain licensing costs and/or the expense of the known licensing costs. VVC faces the challenge of entering the market while avoiding these concerns, which MPEG is determined to do.
Thierry concludes by comparing AVC against HEVC, AV1 and VVC in terms of deployment dates, deployed devices and the deployment environment. He looks at the challenge of moving large video libraries over to high-complexity codecs due to the cost and time required to re-compress. The session ends with questions from the audience. Watch now!
Speaker
Thierry Fautier
President-Chair, Ultra HD Forum
VP Video Strategy, Harmonic
MPEG-DASH has been in increasing use for many years and, while its implementations and versions continue to improve and add new features, its core function remains the same and is the topic of this talk.
For anyone looking for an introduction to multi-bitrate streaming, this talk from Thomas Kernen is a great start as he charts the way streaming has progressed from the initial ‘HTTP progressive download’ to dynamic streaming which adapts to your bandwidth constraints.
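The adaptation Thomas describes boils down to a rate-selection rule in the player: measure your throughput, then pick the highest-bitrate rendition that fits with some headroom. The sketch below is a hypothetical minimal heuristic of our own, not the logic of any particular player; the bitrate ladder and the 0.8 safety factor are illustrative assumptions:

```python
# Minimal adaptive-bitrate sketch: choose the highest rendition whose
# bitrate fits within the measured throughput, leaving a safety margin.
# The ladder and the 0.8 safety factor are illustrative assumptions.

RENDITIONS_KBPS = [400, 800, 1600, 3200, 6000]  # hypothetical bitrate ladder

def select_rendition(measured_kbps, safety=0.8):
    """Return the highest rendition bitrate <= safety * measured throughput."""
    budget = measured_kbps * safety
    fitting = [r for r in RENDITIONS_KBPS if r <= budget]
    # Fall back to the lowest rung if even that exceeds the budget.
    return fitting[-1] if fitting else RENDITIONS_KBPS[0]

print(select_rendition(2500))  # 2500 * 0.8 = 2000 kbps budget → 1600
print(select_rendition(300))   # below the whole ladder → lowest rung, 400
```

Real players layer buffer-occupancy logic, throughput smoothing and switch-rate limits on top of this, but the core idea of adapting to bandwidth constraints is just this selection loop run per segment.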
Thomas explains the way that players and servers talk and deliver files and summarises the end-to-end distribution ecosystem. He covers the fact that MPEG DASH standardises the container description information, captioning and other aspects. DRM is available through the common encryption scheme.
MPD files, the XML manifest files at the core of MPEG-DASH, are next under the spotlight. Thomas talks us through the difference between Media Presentations, Periods, Representations and Segment Info. We then look at the ability to use either the ISO BMFF container format or MPEG-2 TS, as HLS does.
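To make that hierarchy concrete, here is a heavily simplified, hypothetical MPD showing how a Period contains an AdaptationSet which in turn holds multiple Representations; the durations, codec string, bitrates and file names are placeholders, not examples from the talk:

```xml
<!-- Minimal, illustrative MPD: one Period, one video AdaptationSet,
     two Representations at different bitrates. All values are placeholders. -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT60S" minBufferTime="PT2S"
     profiles="urn:mpeg:dash:profile:isoff-on-demand:2011">
  <Period>
    <AdaptationSet mimeType="video/mp4" codecs="avc1.64001f">
      <Representation id="720p" bandwidth="3000000" width="1280" height="720">
        <BaseURL>video_720p.mp4</BaseURL>
      </Representation>
      <Representation id="360p" bandwidth="800000" width="640" height="360">
        <BaseURL>video_360p.mp4</BaseURL>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```

The player reads this manifest, then switches between the Representations inside an AdaptationSet as its bandwidth estimate changes.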
The DASH Industry Forum, DASH-IF, is an organisation which promotes the use of DASH within businesses. As well as spreading the word about what DASH is and how it can be helpful, it also supports interoperability. DASH264 is an output of the DASH-IF, and Thomas describes how this specification for using DASH helps with interoperability.
Buffer bloat is still an issue today. It’s a phenomenon where, for certain types of traffic, the buffers upstream and in someone’s local network can become perpetually full, resulting in increased latency in a stream and potential instability. Thomas looks briefly at this before moving on to HEVC.
At the time of this talk, HEVC was still new and much has happened to it since. This part of the talk gives a good introduction to the reasons that HEVC was brought into being and serves as an interesting comparison for the reasons that VVC, AV1, EVC and other codecs today are needed.
For the latest on DASH, check out the videos in the list of related posts below.
FPGAs are flexible, reprogrammable chips which can do certain tasks faster than CPUs, for example, video encoding and other data-intensive tasks. Once the domain of expensive hardware broadcast appliances, FPGAs are now available in the cloud allowing for cheaper, more flexible encoding.
In fact, according to NGCodec founder Oliver Gunasekara, video transcoding makes up a large percentage of cloud workloads, and this is increasing year on year. The demand for more video and the demand for more efficiently-compressed video both push up the encoding requirements. HEVC and AV1 both need much more encoding power than AVC, but the reduced bitrate can be worth it as long as the transcoding is quick enough and at the right cost.
Oliver looks at how the adoption of new codecs is likely to play out, which will directly feed into the quality of experience: start-up time, visual quality and buffering are all helped by reduced bitrate requirements.
It’s worth looking at the differences and benefits of CPUs, FPGAs and ASICs. The talk examines the CPU time needed to encode HEVC, showing the difficulty of achieving real-time frame rates and the downsides of software encoding. It may not be a surprise that NGCodec was acquired by FPGA manufacturer Xilinx earlier in 2019. Oliver shows us the roadmap, as of June 2019, of the codecs, VQ iterations and encoding densities planned.
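The real-time difficulty is easy to quantify: if a software encoder manages only a few frames per second per core, live frame rates demand many cores at once. The numbers below are assumed placeholder figures for illustration, not measurements from the talk:

```python
# Illustrative: cores needed for real-time software encoding.
# The encode speed per core is an assumed placeholder, not a measurement.
import math

def cores_for_realtime(target_fps, fps_per_core):
    """Cores needed to keep up with target_fps, ignoring scaling overhead."""
    return math.ceil(target_fps / fps_per_core)

# Assume a slow, high-quality software HEVC preset manages 2 fps per core:
print(cores_for_realtime(60, 2.0))  # → 30 cores for 60 fps real time
```

In practice encoders do not scale perfectly across cores, so the true requirement is higher still, which is precisely the gap that FPGA and ASIC encoders aim to close.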
The talk finishes with a variety of questions covering topics such as the applicability of machine learning to encoding, for example scene detection and upscaling algorithms, the practicality of C++-to-Verilog conversion, and the need for a CPU for supporting tasks.